damecco.tex
changeset 2 20d1b0e5838f
parent 1 76dc74f824ba
child 3 550f81b2f81c
     1.1 --- a/damecco.tex	Fri Nov 18 16:45:03 2016 +0100
     1.2 +++ b/damecco.tex	Tue Nov 22 08:15:16 2016 +0100
     1.3 @@ -47,6 +47,34 @@
     1.4  %% for the whole article with \linenumbers.
     1.5  %% \usepackage{lineno}
     1.6  
     1.7 +\usepackage{amsmath}
     1.8 +%% \usepackage[pdftex]{graphicx}
     1.9 +
    1.10 +\usepackage{pgfplots}
    1.11 +\pgfplotsset{width=9cm}
    1.12 +\pgfplotsset{compat=1.8}
    1.13 +
    1.14 +\usepackage{caption}
    1.15 +\usepackage{subcaption} 
    1.16 +
    1.17 +\usepackage{algorithm}
    1.18 +\usepackage{algpseudocode}
    1.19 +\usepackage{tikz}
    1.20 +
    1.21 +\usepackage{amsthm,amssymb}
    1.22 +\renewcommand{\qedsymbol}{\rule{0.7em}{0.7em}}
    1.23 +
    1.24 +\newtheorem{theorem}{Theorem}[subsection]
    1.25 +\newtheorem{corollary}{Corollary}[theorem]
    1.26 +\newtheorem{claim}[theorem]{Claim}
    1.27 +
    1.28 +\newtheorem{definition}{Definition}[subsection]
    1.29 +\newtheorem{notation}{Notation}[subsection]
    1.30 +\newtheorem{example}{Example}[subsection]
    1.31 +\usetikzlibrary{decorations.markings}
    1.32 +\let\oldproofname=\proofname
    1.33 +%% \renewcommand{\proofname}{\rm\bf{Proof:}}
    1.34 +
    1.35  \journal{Discrete Applied Mathematics}
    1.36  
    1.37  \begin{document}
    1.38 @@ -147,8 +175,1201 @@
    1.39  %% \linenumbers
    1.40  
    1.41  %% main text
    1.42 -\section{}
    1.43 -\label{}
    1.44 +\section{Introduction}
    1.45 +\label{sec:intro}
    1.46 +
     1.47 +In recent decades, combinatorial structures, and especially graphs, have attracted ever increasing interest and have been applied to the solution of many new and revisited problems.
     1.48 +Their expressiveness, simplicity and well-studied theory make graphs practical for modelling, and they appear constantly in several seemingly independent fields.
     1.49 +Bioinformatics and chemistry are among the most relevant of these fields.
    1.50 +
     1.51 +Complex biological systems arise from the interaction and cooperation of a large number of molecular components. Understanding such systems at the molecular level is of primary importance, since protein-protein interactions, DNA-protein interactions, metabolic interactions, transcription factor binding, neuronal networks, and hormone signaling networks can be understood only this way.
    1.52 +
     1.53 +For instance, a molecular structure can be considered as a graph whose nodes correspond to atoms and whose edges correspond to chemical bonds. The secondary structure of a protein can also be represented as a graph, where the nodes are associated with amino acids and the edges with hydrogen bonds. Often the nodes represent whole molecular components and the edges represent some relationship between them.
     1.54 +The similarity and dissimilarity of the objects corresponding to the nodes are incorporated into the model by \emph{node labels}.
     1.55 +Many other chemical and biological structures can easily be modelled in a similar way. Understanding such networks essentially requires finding specific subgraphs, which is impossible without the application of graph matching algorithms.
    1.56 +
     1.57 +Finally, let us briefly mention some other real-world fields related to variants of graph matching: pattern recognition and machine vision \cite{HorstBunkeApplications}, symbol recognition \cite{CordellaVentoSymbolRecognition}, and face identification \cite{JianzhuangYongFaceIdentification}.
    1.58 +\\
    1.59 +
     1.60 +The subgraph and induced subgraph matching problems are known to be NP-complete~\cite{SubgraphNPC}, while the graph isomorphism problem is one of the few problems in NP known neither to be in P nor to be NP-complete. Polynomial-time isomorphism algorithms are known for various graph classes, such as trees and planar graphs~\cite{PlanarGraphIso}, bounded valence graphs~\cite{BondedDegGraphIso}, interval graphs~\cite{IntervalGraphIso} and permutation graphs~\cite{PermGraphIso}.
    1.61 +
     1.62 +In the following, some algorithms based on other approaches are summarized, which do not impose any restrictions on the graphs. Although an overall polynomial behaviour cannot be expected from such an alternative, it may often perform well, even on graph classes for which a polynomial algorithm is known. Note that this summary covers only exact matching algorithms, is far from complete, and does not include all the recent algorithms.
    1.63 +
     1.64 +The first practically usable approach is due to \textbf{Ullmann}\cite{Ullmann}; it is a commonly used depth-first search based algorithm with a complex heuristic for reducing the number of visited states. A major drawback is its $\Theta(n^3)$ space complexity, which makes it impractical for large sparse graphs.
    1.65 +
    1.66 +In a recent paper, \textbf{Ullmann}\cite{UllmannBit} presents an improved version of this algorithm based on a bit-vector solution for the binary Constraint Satisfaction Problem.
    1.67 +
     1.68 +The \textbf{Nauty} algorithm\cite{Nauty} transforms the two graphs into a canonical form before checking for isomorphism. It has been considered one of the fastest graph isomorphism algorithms, although graph classes have been shown for which it requires exponentially many steps. This algorithm handles only the graph isomorphism problem.
    1.69 +
    1.70 +The \textbf{LAD} algorithm\cite{Lad} uses a depth-first search strategy and formulates the matching as a Constraint Satisfaction Problem to prune the search tree. The constraints are that the mapping has to be injective and edge-preserving, hence it is possible to handle new matching types as well.
    1.71 +
     1.72 +The \textbf{RI} algorithm\cite{RI} and its variations are based on a state space representation. After reordering the nodes of the graphs, it uses some fast, easily computable heuristic checks without any complex pruning rules. It runs very efficiently on graphs from biology, and won the International Contest on Pattern Search in Biological Databases\cite{Content}.
    1.73 +
     1.74 +The most commonly used algorithm at present is \textbf{VF2}\cite{VF2}, the improved version of VF\cite{VF}, which was designed for solving pattern matching and computer vision problems and has been one of the best overall algorithms for more than a decade. Although it cannot compete with the newest specialized algorithms, it is still widely used due to its simplicity and space efficiency. VF2 uses a state space representation and checks some conditions in each state to prune the search tree.
    1.75 +
     1.76 +Our first graph matching algorithm was an improved version of VF2 which recognized the significance of the node ordering and offered further opportunities to increase the cutting efficiency and reduce the computational complexity. This project was initiated and sponsored by QuantumBio Inc.\cite{QUANTUMBIO}, and the implementation --- along with its source code --- has been published as a part of the LEMON\cite{LEMON} open source graph library.
    1.77 +
     1.78 +This paper introduces \textbf{VF2++}, a new, further improved algorithm for the graph and (induced) subgraph isomorphism problems, which uses efficient cutting rules and determines a node order in which VF2 runs significantly faster on practical inputs.
    1.79 +
     1.80 +Meanwhile, another variant called \textbf{VF2 Plus}\cite{VF2Plus} has been published. It is considered to be as efficient as the RI algorithm and behaves strictly better on large graphs. The main idea of VF2 Plus is to precompute a heuristic node order of the small graph, in which VF2 works more efficiently.
    1.82 +
    1.83 +\section{Problem Statement}
    1.84 +This section provides a detailed description of the problems to be solved.
    1.85 +\subsection{Definitions}
    1.86 +
    1.87 +Throughout the paper $G_{small}=(V_{small}, E_{small})$ and $G_{large}=(V_{large}, E_{large})$ denote two undirected graphs.
    1.88 +\begin{definition}\label{sec:ismorphic}
     1.89 +$G_{small}$ and $G_{large}$ are \textbf{isomorphic} if there exists a bijection $M: V_{small} \longrightarrow V_{large}$ for which the following holds:
    1.90 +\begin{center}
    1.91 +$\forall u,v\in{V_{small}} : (u,v)\in{E_{small}} \Leftrightarrow (M(u),M(v))\in{E_{large}}$
    1.92 +\end{center}
    1.93 +\end{definition}
     1.94 +For the sake of simplicity, subgraphs and induced subgraphs are defined in this paper in a more general way than usual:
    1.95 +\begin{definition}
     1.96 +$G_{small}$ is a \textbf{subgraph} of $G_{large}$ if there exists an injection $I: V_{small}\longrightarrow V_{large}$ for which the following holds:
    1.97 +\begin{center}
    1.98 +$\forall u,v \in{V_{small}} : (u,v)\in{E_{small}} \Rightarrow (I(u),I(v))\in E_{large}$
    1.99 +\end{center}
   1.100 +\end{definition}
   1.101 +
   1.102 +\begin{definition} 
    1.103 +$G_{small}$ is an \textbf{induced subgraph} of $G_{large}$ if there exists an injection $I: V_{small}\longrightarrow V_{large}$ for which the following holds:
   1.104 +\begin{center}
   1.105 +$\forall u,v \in{V_{small}} : (u,v)\in{E_{small}} \Leftrightarrow (I(u),I(v))\in E_{large}$
   1.106 +\end{center}
   1.107 +\end{definition}
   1.108 +
   1.109 +\begin{definition}
    1.110 +$lab: (V_{small}\cup V_{large}) \longrightarrow K$ is a \textbf{node label function}, where $K$ is an arbitrary set. The elements of $K$ are the \textbf{node labels}. Two nodes $u$ and $v$ are said to be \textbf{equivalent} if $lab(u)=lab(v)$.
   1.111 +\end{definition}
   1.112 +
    1.113 +When node labels are also given, the matched nodes must have the same labels.
    1.114 +For example, node-labeled isomorphism is phrased as follows.
   1.115 +\begin{definition}
    1.116 +$G_{small}$ and $G_{large}$ are \textbf{isomorphic by the node label function} $lab$ if there exists a bijection $M: V_{small} \longrightarrow V_{large}$ for which the following holds:
   1.117 +\begin{center}
   1.118 +$(\forall u,v\in{V_{small}} : (u,v)\in{E_{small}} \Leftrightarrow (M(u),M(v))\in{E_{large}})$
   1.119 + and $(\forall u\in{V_{small}} : lab(u)=lab(M(u)))$
   1.120 +\end{center}
   1.121 +\end{definition}
   1.122 +
   1.123 +The other two definitions can be extended in the same way.
   1.124 +
    1.125 +Note that an edge label function can be defined similarly to the node label function, and all the definitions can be extended with additional conditions, but this is beyond the scope of this work.
   1.126 +
    1.127 +The equivalence of two nodes is usually defined by another relation, $R\subseteq (V_{small}\cup V_{large})^2$. This coincides with the definition given above if $R$ is an equivalence relation, which is not a restriction in biological and chemical applications.
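+
+To make the definitions concrete, the following C++ fragment checks whether a
+given injection satisfies the (node-labeled) subgraph or induced subgraph
+condition of the definitions above. It is only an illustrative sketch: the
+adjacency-list representation and all identifiers are ours and are unrelated
+to the implementation discussed later.
+\begin{verbatim}
+#include <set>
+#include <utility>
+#include <vector>
+
+using Graph = std::vector<std::vector<int>>;  // adjacency lists, nodes 0..n-1
+
+// Does the injection I : V_small -> V_large satisfy the node-labeled
+// (induced) subgraph condition?  I[u] is the image of node u; 'induced'
+// switches between the "=>" and the "<=>" variants of the definitions.
+bool checkMapping(const Graph& gSmall, const Graph& gLarge,
+                  const std::vector<int>& labSmall,
+                  const std::vector<int>& labLarge,
+                  const std::vector<int>& I, bool induced)
+{
+    const int n = gSmall.size();
+    for (int u = 0; u < n; ++u)                   // labels must agree
+        if (labSmall[u] != labLarge[I[u]]) return false;
+
+    std::set<std::pair<int,int>> smallE, largeE;  // edge sets for membership tests
+    for (int u = 0; u < n; ++u)
+        for (int w : gSmall[u]) smallE.insert({u, w});
+    for (int v = 0; v < (int)gLarge.size(); ++v)
+        for (int w : gLarge[v]) largeE.insert({v, w});
+
+    for (int u = 0; u < n; ++u)
+        for (int w = 0; w < n; ++w) {
+            if (u == w) continue;
+            bool inSmall = smallE.count({u, w}) > 0;
+            bool inLarge = largeE.count({I[u], I[w]}) > 0;
+            if (inSmall && !inLarge) return false;            // subgraph direction
+            if (induced && inLarge && !inSmall) return false; // induced direction
+        }
+    return true;   // for graph isomorphism, additionally |V_small| = |V_large|
+}
+\end{verbatim}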
   1.128 +
   1.129 +\subsection{Common problems}\label{sec:CommProb}
   1.130 +
   1.131 +The focus of this paper is on two extensively studied topics, the subgraph isomorphism and its variations. However, the following problems also appear in many applications.
   1.132 +
    1.133 +The \textbf{subgraph matching problem} is the following: is $G_{small}$ isomorphic to any subgraph of $G_{large}$ under a given node label function?
   1.134 +
   1.135 +The \textbf{induced subgraph matching problem} asks the same about the existence of an induced subgraph.
   1.136 +
    1.137 +The \textbf{graph isomorphism problem} can be defined as an induced subgraph matching problem in which the two graphs are of the same size.
   1.138 +
    1.139 +In addition to deciding existence, it may be necessary to exhibit such a subgraph, or even to list all of them.
   1.140 +
    1.141 +It should be noted that some authors misleadingly use the term \emph{subgraph isomorphism problem} to mean the \emph{induced subgraph isomorphism problem}.
   1.142 +
    1.143 +The following sections describe VF2, VF2++ and VF2 Plus, and provide a detailed comparison of them.
   1.144 +
   1.145 +\section{The VF2 Algorithm}
   1.146 +This algorithm is the basis of both the VF2++ and the VF2 Plus.
   1.147 +VF2 is able to handle all the variations mentioned in \textbf{Section \ref{sec:CommProb})}.
   1.148 +Although it can also handle directed graphs, for the sake of simplicity, only the undirected case will be discussed.
   1.149 +
   1.150 +
   1.151 +\subsection{Common notations}
   1.152 +\indent
   1.153 +Assume $G_{small}$ is searched in $G_{large}$.
   1.154 +The following definitions and notations will be used throughout the whole paper.
   1.155 +\begin{definition}
    1.156 +A set $M\subseteq V_{small}\times V_{large}$ is called a \textbf{mapping} if no node of $V_{small}$ or of $V_{large}$ appears in more than one pair in $M$.
    1.157 +That is, $M$ uniquely associates some of the nodes of $V_{small}$ with some of the nodes of $V_{large}$, and vice versa.
   1.158 +\end{definition}
   1.159 +
   1.160 +\begin{definition}
    1.161 +A mapping $M$ \textbf{covers} a node $v$ if there exists a pair in $M$ which contains $v$.
   1.162 +\end{definition}
   1.163 +
   1.164 +\begin{definition}
    1.165 +A mapping $M$ is a $\mathbf{whole\ mapping}$ if $M$ covers all the nodes of $V_{small}$.
   1.166 +\end{definition}
   1.167 +
   1.168 +\begin{notation}
   1.169 +Let $\mathbf{M_{small}(s)} := \{u\in V_{small} : \exists v\in V_{large}: (u,v)\in M(s)\}$ and $\mathbf{M_{large}(s)} := \{v\in V_{large} : \exists u\in V_{small}: (u,v)\in M(s)\}$.
   1.170 +\end{notation}
   1.171 +
   1.172 +\begin{notation}
    1.173 +For a mapping $M$ and a node $v\in V_{small}\cup V_{large}$, let $\mathbf{Pair(M,v)}$ denote the node paired with $v$ in $M$ if such a node exists; otherwise, $\mathbf{Pair(M,v)}$ is undefined.
   1.174 +\end{notation}
   1.175 +
    1.176 +Note that if $\mathbf{Pair(M,v)}$ exists, then it is unique.
   1.177 +
    1.178 +The definitions of the isomorphism types can be rephrased in terms of the existence of a special whole mapping $M$, since such a mapping represents an injective correspondence between the two node sets. For example,
   1.179 +\begin{center}
   1.180 +$M\subseteq V_{small}\times V_{large}$ represents an induced subgraph isomorphism $\Leftrightarrow$ $M$ is whole mapping and $\forall u,v \in{V_{small}} : (u,v)\in{E_{small}} \Leftrightarrow (Pair(M,u),Pair(M,v))\in E_{large}$.
   1.181 +\end{center}
   1.182 +
   1.183 +\begin{definition}
   1.184 +A set of whole mappings is called \textbf{problem type}.
   1.185 +\end{definition}
   1.186 +Throughout the paper, $\mathbf{PT}$ denotes a generic problem type which can be substituted by any problem type.
   1.187 +
    1.188 +A whole mapping $W\mathbf{\ is\ of\ type\ PT}$ if $W\in PT$. Using this notation, VF2 searches for a whole mapping $W$ of type $PT$.
   1.189 +
    1.190 +For example, the problem type of the graph isomorphism problem is the following.
    1.191 +A whole mapping $W$ is in $\mathbf{ISO}$ iff the bijection represented by $W$ satisfies \textbf{Definition \ref{sec:ismorphic})}.
    1.192 +The subgraph and induced subgraph matching problems can be formalized in a similar way; let their problem types be denoted by $\mathbf{SUB}$ and $\mathbf{IND}$, respectively.
   1.193 +
   1.194 +\begin{definition}
   1.195 +\label{expPT}
   1.196 +$PT$ is an \textbf{expanding problem type} if $\ \forall\ W\in PT:\ \forall u_1,u_2\in V_{small}:\ (u_1,u_2)\in E_{small}\Rightarrow (Pair(W,u_1),Pair(W,u_2))\in E_{large}$, that is each edge of $G_{small}$ has to be mapped to an edge of $G_{large}$ for each mapping in $PT$.
   1.197 +\end{definition}
   1.198 +
   1.199 +Note that $ISO$, $SUB$ and $IND$ are expanding problem types.
   1.200 +
    1.201 +This paper deals only with the three problem types mentioned above, but
    1.202 +the following generic definitions make it possible to handle other types
    1.203 +as well, although it may be challenging to find a proper consistency
    1.204 +function and an efficient cutting function.
   1.205 +
   1.206 +\begin{definition}
    1.207 +Let $M$ be a mapping. A logical function $\mathbf{Cons_{PT}}$ is a \textbf{consistency function by } $\mathbf{PT}$ if the following holds: if there exists a whole mapping $W$ of type $PT$ for which $M\subseteq W$, then $Cons_{PT}(M)$ is true.
   1.208 +\end{definition}
   1.209 +
   1.210 +\begin{definition} 
    1.211 +Let $M$ be a mapping. A logical function $\mathbf{Cut_{PT}}$ is a \textbf{cutting function by } $\mathbf{PT}$ if the following holds: $\mathbf{Cut_{PT}(M)}$ is false whenever $M$ can be extended to a whole mapping $W$ of type $PT$.
   1.212 +\end{definition}
   1.213 +
   1.214 +\begin{definition}
    1.215 +$M$ is said to be a \textbf{consistent mapping by} $\mathbf{PT}$ if $Cons_{PT}(M)$ is true.
   1.216 +\end{definition}
   1.217 +
   1.218 +$Cons_{PT}$ and $Cut_{PT}$ will often be used in the following form.
   1.219 +\begin{notation}
    1.220 +Let $\mathbf{Cons_{PT}(p, M)}:=Cons_{PT}(M\cup\{p\})$ and $\mathbf{Cut_{PT}(p, M)}:=Cut_{PT}(M\cup\{p\})$, where $p\in{V_{small}\!\times\!V_{large}}$ and $M\cup\{p\}$ is a mapping.
   1.221 +\end{notation}
   1.222 +
   1.223 +$Cons_{PT}$ will be used to check the consistency of the already covered nodes, while $Cut_{PT}$ is for looking ahead to recognize if no whole consistent mapping can contain the current mapping.
   1.224 +
   1.225 +\subsection{Overview of the algorithm}
   1.226 +VF2 uses a state space representation of mappings, $Cons_{PT}$ for excluding inconsistency with the problem type and $Cut_{PT}$ for pruning the search tree.
   1.227 +Each state $s$ of the matching process can be associated with a mapping $M(s)$.
   1.228 +
   1.229 +\textbf{Algorithm~\ref{alg:VF2Pseu})} is a high level description of the VF2 matching algorithm.
   1.230 +
   1.231 +
   1.232 +\begin{algorithm}
    1.233 +\algtext*{EndIf}% do not print "end if"
    1.234 +\algtext*{EndFor}% do not print "end for"
    1.235 +\algtext*{EndProcedure}% do not print "end procedure"
    1.236 +\caption{\hspace{0.5cm}A high-level description of VF2}\label{alg:VF2Pseu}
   1.237 +\begin{algorithmic}[1]
   1.238 +
   1.239 +\Procedure{VF2}{State $s$, ProblemType $PT$}
    1.240 +  \If{$M(s)$ covers $V_{small}$}
   1.241 +    \State Output($M(s)$)
   1.242 +  \Else
   1.243 +  
   1.244 +  \State Compute the set $P(s)$ of the pairs candidate for inclusion in $M(s)$
   1.245 +  \ForAll{$p\in{P(s)}$}
   1.246 +    \If{Cons$_{PT}$($p, M(s)$) $\wedge$ $\neg$Cut$_{PT}$($p, M(s)$)}
   1.247 +      \State Compute the nascent state $\tilde{s}$ by adding $p$ to $M(s)$
   1.248 +      \State \textbf{call} VF2($\tilde{s}$, $PT$)
   1.249 +    \EndIf
   1.250 +  \EndFor
   1.251 +  \EndIf
   1.252 +\EndProcedure
   1.253 +\end{algorithmic}
   1.254 +\end{algorithm}
   1.255 +
   1.256 +
   1.257 +The initial state $s_0$ is associated with $M(s_0)=\emptyset$, i.e. it starts with an empty mapping.
   1.258 +
   1.259 +For each state $s$, the algorithm computes $P(s)$, the set of candidate node pairs for adding to the current state $s$.
   1.260 +
   1.261 +For each pair $p$ in $P(s)$, $Cons_{PT}(p,M(s))$ and $Cut_{PT}(p,M(s))$ are evaluated. If $Cons_{PT}(p,M(s))$ is true and $Cut_{PT}(p,M(s))$ is false, the successor state $\tilde{s}=s\cup \{p\}$ is computed, and the whole process is recursively applied to $\tilde{s}$. Otherwise, $\tilde{s}$ is not consistent by $PT$ or it can be proved that $s$ can not be extended to a whole mapping.
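+
+The recursion of \textbf{Algorithm~\ref{alg:VF2Pseu})} can be sketched in C++
+as follows. The helpers \texttt{candidates}, \texttt{cons} and \texttt{cut}
+stand for $P(s)$, $Cons_{PT}$ and $Cut_{PT}$; they are assumptions of this
+sketch only and do not describe the actual implementation.
+\begin{verbatim}
+#include <functional>
+#include <utility>
+#include <vector>
+
+// mapping[u] == -1 means that u (a node of V_small) is still uncovered.
+void vf2(std::vector<int>& mapping, int covered, int nSmall,
+         const std::function<std::vector<std::pair<int,int>>(
+             const std::vector<int>&)>& candidates,
+         const std::function<bool(int,int,const std::vector<int>&)>& cons,
+         const std::function<bool(int,int,const std::vector<int>&)>& cut,
+         const std::function<void(const std::vector<int>&)>& output)
+{
+    if (covered == nSmall) {                    // M(s) covers V_small
+        output(mapping);
+        return;
+    }
+    for (auto [u, v] : candidates(mapping)) {   // p = (u,v) in P(s)
+        if (cons(u, v, mapping) && !cut(u, v, mapping)) {
+            mapping[u] = v;                     // compute the nascent state
+            vf2(mapping, covered + 1, nSmall, candidates, cons, cut, output);
+            mapping[u] = -1;                    // restore the state on return
+        }
+    }
+}
+\end{verbatim}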
   1.262 +
    1.263 +The correctness of this approach is guaranteed by the following claim.
   1.264 +\begin{claim}
   1.265 +Through consistent mappings, only consistent whole mappings can be reached, and all of the whole mappings are reachable through consistent mappings.
   1.266 +\end{claim}
   1.267 +
    1.268 +Note that a state may be reached in many different ways, since the order of insertions into $M$ does not influence the nascent mapping. In fact, the number of different ways leading to the same state can be exponentially large. If $G_{small}$ and $G_{large}$ are cycles with $n$ nodes and $n$ different node labels, there exists exactly one graph isomorphism between them, but it will be reached in $n!$ different ways.
   1.269 +
   1.270 +However, one may observe
   1.271 +
   1.272 +\begin{claim}
   1.273 +\label{claim:claimTotOrd}
    1.274 +Let $\prec$ be an arbitrary total ordering relation on $V_{small}$.
   1.275 +If the algorithm ignores each $p=(u,v) \in P(s)$, for which
   1.276 +\begin{center}
   1.277 +$\exists (\hat{u},\hat{v})\in P(s): \hat{u} \prec u$,
   1.278 +\end{center}
    1.279 +then no state can be reached more than once and each state associated with a whole mapping remains reachable.
   1.280 +\end{claim}
   1.281 +
   1.282 +Note that the cornerstone of the improvements to VF2 is a proper choice of a total ordering.
   1.283 +
   1.284 +\subsection{The candidate set P(s)}
   1.285 +\label{candidateComputingVF2}
   1.286 +$P(s)$ is the set of the candidate pairs for inclusion in $M(s)$.
   1.287 +Suppose that $PT$ is an expanding problem type, see \textbf{Definition~\ref{expPT})}.
   1.288 +
   1.289 +\begin{notation}
   1.290 +Let $\mathbf{T_{small}(s)}:=\{u \in V_{small} : u$ is not covered by $M(s)\wedge\exists \tilde{u}\in{V_{small}: (u,\tilde{u})\in E_{small}} \wedge \tilde{u}$ is covered by $M(s)\}$, and \\ $\mathbf{T_{large}(s)}\!:=\!\{v \in\!V_{large}\!:\!v$ is not covered by $M(s)\wedge\!\exists\tilde{v}\!\in\!{V_{large}\!:\!(v,\tilde{v})\in\!E_{large}} \wedge \tilde{v}$ is covered by $M(s)\}$
   1.291 +\end{notation}
   1.292 +
    1.293 +The set $P(s)$ consists of the pairs of uncovered neighbours of covered nodes; if there is no such pair, then all the pairs containing two uncovered nodes are added. Formally, let
   1.294 +\[
   1.295 + P(s)\!=\!
   1.296 +  \begin{cases} 
   1.297 +   T_{small}(s)\times T_{large}(s)&\hspace{-0.15cm}\text{if } T_{small}(s)\!\neq\!\emptyset\!\wedge\!T_{large}(s)\!\neq \emptyset,\\
   1.298 +   (V_{small}\!\setminus\!M_{small}(s))\!\times\!(V_{large}\!\setminus\!M_{large}(s)) &\hspace{-0.15cm}otherwise.
   1.299 +  \end{cases}
   1.300 +\]
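+
+The following C++ fragment sketches this computation under an illustrative
+adjacency-list representation; the container layouts and names are assumptions
+of the sketch, not the data structures of the actual implementation.
+\begin{verbatim}
+#include <utility>
+#include <vector>
+
+using Graph = std::vector<std::vector<int>>;   // adjacency lists
+
+// mappingSmall[u] = pair of u or -1; coveredLarge[v] = true if v is covered.
+std::vector<std::pair<int,int>> candidateSet(
+    const Graph& gSmall, const Graph& gLarge,
+    const std::vector<int>& mappingSmall, const std::vector<bool>& coveredLarge)
+{
+    // T(s): uncovered nodes having at least one covered neighbour
+    auto frontier = [](const Graph& g, auto isCovered) {
+        std::vector<int> t;
+        for (int x = 0; x < (int)g.size(); ++x) {
+            if (isCovered(x)) continue;
+            for (int y : g[x])
+                if (isCovered(y)) { t.push_back(x); break; }
+        }
+        return t;
+    };
+    std::vector<int> tSmall =
+        frontier(gSmall, [&](int u) { return mappingSmall[u] != -1; });
+    std::vector<int> tLarge =
+        frontier(gLarge, [&](int v) { return coveredLarge[v]; });
+
+    std::vector<std::pair<int,int>> P;
+    if (!tSmall.empty() && !tLarge.empty()) {          // T_small x T_large
+        for (int u : tSmall)
+            for (int v : tLarge) P.push_back({u, v});
+    } else {                                           // uncovered x uncovered
+        for (int u = 0; u < (int)gSmall.size(); ++u)
+            if (mappingSmall[u] == -1)
+                for (int v = 0; v < (int)gLarge.size(); ++v)
+                    if (!coveredLarge[v]) P.push_back({u, v});
+    }
+    return P;
+}
+\end{verbatim}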
   1.301 +
   1.302 +\subsection{Consistency}
   1.303 +This section defines the consistency functions for the different problem types mentioned in \textbf{Section \ref{sec:CommProb})}.
   1.304 +\begin{notation}
   1.305 +Let $\mathbf{\Gamma_{small} (u)}:=\{\tilde{u}\in V_{small} : (u,\tilde{u})\in E_{small}\}$\\
   1.306 +Let $\mathbf{\Gamma_{large} (v)}:=\{\tilde{v}\in V_{large} : (v,\tilde{v})\in E_{large}\}$
   1.307 +\end{notation}
   1.308 +Suppose $p=(u,v)$, where $u\in V_{small}$ and $v\in V_{large}$,
   1.309 +$s$ is a state of the matching procedure,
   1.310 +$M(s)$ is consistent mapping by $PT$ and $lab(u)=lab(v)$.
   1.311 +$Cons_{PT}(p,M(s))$ checks whether including pair $p$ into $M(s)$ leads to a consistent mapping by $PT$.
   1.312 +
   1.313 +\subsubsection{Induced subgraph isomorphism}
    1.314 +$M(s)\cup \{(u,v)\}$ is a consistent mapping by $IND$ $\Leftrightarrow (\forall \tilde{u}\in M_{small}(s): (u,\tilde{u})\in E_{small} \Leftrightarrow (v,Pair(M(s),\tilde{u}))\in E_{large})$.\newline
   1.315 +The following formulation gives an efficient way of calculating $Cons_{IND}$.
   1.316 +\begin{claim}
   1.317 +$Cons_{IND}((u,v),M(s)):=(\forall \tilde{v}\in \Gamma_{large}(v) \ \cap\ M_{large}(s):\\(Pair(M(s),\tilde{v}),u)\in E_{small})\wedge 
   1.318 +(\forall \tilde{u}\in \Gamma_{small}(u) \ \cap\ M_{small}(s):(v,Pair(M(s),\tilde{u}))\in E_{large})$ is a consistency function in the case of $IND$.
   1.319 +\end{claim}
   1.320 +
   1.321 +\subsubsection{Graph isomorphism}
   1.322 +$M(s)\cup \{(u,v)\}$ is a consistent mapping by $ISO$ $\Leftrightarrow$  $M(s)\cup \{(u,v)\}$ is a consistent mapping by $IND$.
   1.323 +\begin{claim}
   1.324 +$Cons_{ISO}((u,v),M(s))$ is a consistency function by $ISO$ if and only if it is a consistency function by $IND$.
   1.325 +\end{claim}
   1.326 +\subsubsection{Subgraph isomorphism}
    1.327 +$M(s)\cup \{(u,v)\}$ is a consistent mapping by $SUB$ $\Leftrightarrow (\forall \tilde{u}\in M_{small}(s):\\(u,\tilde{u})\in E_{small} \Rightarrow (v,Pair(M(s),\tilde{u}))\in E_{large})$.
   1.328 +\newline
   1.329 +The following formulation gives an efficient way of calculating $Cons_{SUB}$.
   1.330 +\begin{claim}
   1.331 +$Cons_{SUB}((u,v),M(s)):=
   1.332 +(\forall \tilde{u}\in \Gamma_{small}(u) \ \cap\ M_{small}(s):\\(v,Pair(M(s),\tilde{u}))\in E_{large})$ is a consistency function by $SUB$.
   1.333 +\end{claim}
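+
+A possible realization of these consistency checks is sketched below in C++
+(ordered edge sets are used for brevity; with constant-time adjacency tests
+the checks run in $\Theta(deg)$ time). The data layout is an illustrative
+assumption, not the one used in the actual implementation.
+\begin{verbatim}
+#include <set>
+#include <utility>
+#include <vector>
+
+using Graph = std::vector<std::vector<int>>;
+using EdgeSet = std::set<std::pair<int,int>>;  // both directions of each edge
+
+// pairSmall[u] / pairLarge[v] give the current pair of a node, or -1.
+bool consSub(int u, int v, const Graph& gSmall,
+             const std::vector<int>& pairSmall, const EdgeSet& largeEdges)
+{
+    for (int us : gSmall[u]) {                 // covered neighbours of u
+        int vs = pairSmall[us];
+        if (vs != -1 && !largeEdges.count({v, vs})) return false;
+    }
+    return true;
+}
+
+bool consInd(int u, int v, const Graph& gSmall, const Graph& gLarge,
+             const std::vector<int>& pairSmall, const std::vector<int>& pairLarge,
+             const EdgeSet& smallEdges, const EdgeSet& largeEdges)
+{
+    if (!consSub(u, v, gSmall, pairSmall, largeEdges)) return false;
+    for (int vs : gLarge[v]) {                 // covered neighbours of v
+        int us = pairLarge[vs];
+        if (us != -1 && !smallEdges.count({us, u})) return false;
+    }
+    return true;
+}
+\end{verbatim}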
   1.334 +
   1.335 +\subsection{Cutting rules}
    1.336 +$Cut_{PT}(p,M(s))$ is defined by a collection of efficiently verifiable conditions. The requirement is that $Cut_{PT}(p,M(s))$ may be true only if it is impossible to extend $M(s)\cup \{p\}$ to a whole mapping.
   1.337 +\begin{notation}
   1.338 +
   1.339 +Let $\mathbf{\tilde{T}_{small}}(s):=(V_{small}\backslash M_{small}(s))\backslash T_{small}(s)$, and \\ $\mathbf{\tilde{T}_{large}}(s):=(V_{large}\backslash M_{large}(s))\backslash T_{large}(s)$.
   1.340 +\end{notation}
   1.341 +\subsubsection{Induced subgraph isomorphism}
   1.342 +\begin{claim}
   1.343 +$Cut_{IND}((u,v),M(s)):= |\Gamma_{large} (v)\ \cap\ T_{large}(s)| < |\Gamma_{small} (u)\ \cap\ T_{small}(s)| \vee |\Gamma_{large}(v)\cap \tilde{T}_{large}(s)| < |\Gamma_{small}(u)\cap \tilde{T}_{small}(s)|$ is a cutting function by $IND$.
   1.344 +\end{claim}
   1.345 +\subsubsection{Graph isomorphism}
    1.346 +Note that the cutting function for induced subgraph isomorphism defined above is also a cutting function by $ISO$; however, it is less effective than the following one, while their computational complexity is the same.
   1.347 +\begin{claim}
   1.348 +$Cut_{ISO}((u,v),M(s)):= |\Gamma_{large} (v)\ \cap\ T_{large}(s)| \neq |\Gamma_{small} (u)\ \cap\ T_{small}(s)| \vee |\Gamma_{large}(v)\cap  \tilde{T}_{large}(s)| \neq |\Gamma_{small}(u)\cap \tilde{T}_{small}(s)|$ is a cutting function by $ISO$.
   1.349 +\end{claim}
   1.350 +
   1.351 +\subsubsection{Subgraph isomorphism}
   1.352 +\begin{claim}
   1.353 +$Cut_{SUB}((u,v),M(s)):= |\Gamma_{large} (v)\ \cap\ T_{large}(s)| < |\Gamma_{small} (u)\ \cap\ T_{small}(s)|$ is a cutting function by $SUB$.
   1.354 +\end{claim}
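+
+The three unlabeled cutting rules can be evaluated from four counters, as the
+following C++ sketch shows; \texttt{inT} and \texttt{inTilde} are assumed to
+be the characteristic vectors of $T(s)$ and $\tilde{T}(s)$ of the respective
+graph, and all names are illustrative.
+\begin{verbatim}
+#include <utility>
+#include <vector>
+
+using Graph = std::vector<std::vector<int>>;
+
+// Returns |Gamma(x) intersect T| and |Gamma(x) intersect T~| for a node x.
+std::pair<int,int> frontierCounts(const Graph& g, int x,
+                                  const std::vector<bool>& inT,
+                                  const std::vector<bool>& inTilde)
+{
+    int t = 0, tt = 0;
+    for (int y : g[x]) { t += inT[y]; tt += inTilde[y]; }
+    return {t, tt};
+}
+
+// cuT  = |Gamma_small(u) intersect T_small|,  cvT  = |Gamma_large(v) intersect T_large|,
+// cuTt = |Gamma_small(u) intersect T~_small|, cvTt = |Gamma_large(v) intersect T~_large|.
+bool cutSub(int cuT, int cvT)                     { return cvT < cuT; }
+bool cutInd(int cuT, int cvT, int cuTt, int cvTt) { return cvT < cuT || cvTt < cuTt; }
+bool cutIso(int cuT, int cvT, int cuTt, int cvTt) { return cvT != cuT || cvTt != cuTt; }
+\end{verbatim}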
   1.355 +Note that there is a significant difference between induced and non-induced subgraph isomorphism:
   1.356 +
   1.357 +\begin{claim}
   1.358 +\label{claimSUB}
   1.359 +$Cut_{SUB}'((u,v),M(s)):= |\Gamma_{large} (v)\ \cap\ T_{large}(s)| < |\Gamma_{small} (u)\ \cap\ T_{small}(s)| \vee |\Gamma_{large}(v)\cap \tilde{T}_{large}(s)| < |\Gamma_{small}(u)\cap \tilde{T}_{small}(s)|$ is \textbf{not} a cutting function by $SUB$.
   1.360 +\end{claim}
   1.361 +\begin{proof}$ $\\
   1.362 +\vspace*{-0.5cm}
   1.363 +
   1.364 +\begin{figure}
   1.365 +\begin{center}
   1.366 +\begin{tikzpicture}
   1.367 +  [scale=.8,auto=left,every node/.style={circle,fill=black!15}]
   1.368 +  \node[rectangle,fill=black!15] at (4,6) {$G_{small}$};
   1.369 +  \node (u4) at (2.5,10)  {$u_4$};
   1.370 +  \node (u3) at (5.5,10) {$u_3$};
   1.371 +  \node (u1) at (2.5,7)  {$u_1$};
   1.372 +  \node (u2) at (5.5,7) {$u_2$};
   1.373 +  
   1.374 +  \node[rectangle,fill=black!30] at (13.5,6) {$G_{large}$};
   1.375 +  \node[fill=black!30] (v4) at (12,10)  {$v_4$};
   1.376 +  \node[fill=black!30] (v3) at (15,10) {$v_3$};
   1.377 +  \node[fill=black!30] (v1) at (12,7)  {$v_1$};
   1.378 +  \node[fill=black!30] (v2) at (15,7) {$v_2$};
   1.379 +  
   1.380 +
   1.381 +  \foreach \from/\to in {u1/u2,u2/u3,u3/u4,u4/u1}
   1.382 +    \draw (\from) -- (\to);
   1.383 +  \foreach \from/\to in {v1/v2,v2/v3,v3/v4,v4/v1,v1/v3}
   1.384 +    \draw (\from) -- (\to);
   1.385 +%    \draw[dashed] (\from) -- (\to);
   1.386 +\end{tikzpicture}
   1.387 +\caption{Graphs for the proof of \textbf{Claim \ref{claimSUB}}} \label{fig:proofSUB}
   1.388 +\end{center}
   1.389 +\end{figure}
   1.390 +Let the two graphs of \textbf{Figure \ref{fig:proofSUB})} be the input graphs.
    1.391 +Suppose the total ordering relation is $u_1 \prec u_2 \prec u_3 \prec u_4$, $M(s)\!=\{(u_1,v_1)\}$, and VF2 tries to add $(u_2,v_2)\in P(s)$.\newline
    1.392 +$Cons_{SUB}((u_2,v_2),M(s))=true$, so $M\cup \{(u_2,v_2)\}$ is consistent by $SUB$. The cutting function $Cut_{SUB}((u_2,v_2),M(s))$ is false, so it does not allow pruning the search tree.\newline
    1.393 +On the other hand, $Cut_{SUB}'((u_2,v_2),M(s))$ is true, since\\$0=|\Gamma_{large}(v_2)\cap \tilde{T}_{large}(s)|<|\Gamma_{small}(u_2)\cap \tilde{T}_{small}(s)|=1$; still, the search tree must not be pruned here, because otherwise the mapping $\{(u_1,v_1),(u_2,v_2),(u_3,v_3),(u_4,v_4)\}$ could not be found.
   1.394 +\end{proof}
   1.395 +
   1.396 +\newpage
   1.397 +\section{The VF2++ Algorithm}
   1.398 +Although any total ordering relation makes the search space of VF2 a tree, its
   1.399 +choice turns out to dramatically influence the number of visited states. The goal is to determine an efficient one as quickly as possible.
   1.400 +
    1.401 +The main reason for the superiority of VF2++ over VF2 is twofold. Firstly, taking into account the structure and the node labeling of the graph, VF2++ determines a node order in which most of the unfruitful branches of the search space can be pruned immediately. Secondly, introducing more efficient --- nevertheless still easy to compute --- cutting rules reduces the chance of going astray even further.
   1.402 +
    1.403 +In addition to the usual subgraph isomorphism, specialized versions for induced subgraph isomorphism and for graph isomorphism have been designed. VF2++ achieves a runtime improvement of an order of magnitude for induced subgraph isomorphism and shows better asymptotic behaviour on the graph isomorphism problem.
   1.404 +
    1.405 +Note that a weaker version of the cutting rules and a more efficient way of
    1.406 +computing the candidate set were also described in \cite{VF2Plus}.
   1.407 +
   1.408 +It should be noted that all the methods described in this section are extendable to handle directed graphs and edge labels as well.
   1.409 +
   1.410 +The basic ideas and the detailed description of VF2++ are provided in the following.
   1.411 +
   1.412 +\subsection{Preparations}
   1.413 +\begin{claim}
   1.414 +\label{claim:claimCoverFromLeft}
    1.415 +The total ordering relation uniquely determines the node order in which the nodes of $V_{small}$ are covered by VF2. From the point of view of the matching procedure, this means that on the $d$-th level always the same node of $G_{small}$ is covered.
   1.416 +\end{claim}
   1.417 +\begin{proof}
   1.418 +In order to make the search space a tree, the pairs in $\{(u,v)\in P(s) : \exists \hat{u} : \hat{u}\prec u\}$ are excluded from $P(s)$.
   1.419 +\newline
   1.420 +Let $\tilde{P}(s):=P(s)\backslash \{(u,v)\in P(s) : \exists \hat{u} : \hat{u}\prec u\}$
   1.421 +\newline
    1.422 +The relation $\prec$ is a total ordering, so $\exists!\ \tilde{u} : \forall\ (u,v)\in \tilde{P}(s): u=\tilde u$. Since a pair from $\tilde{P}(s)$ is chosen for inclusion into $M(s)$, it is obvious that only $\tilde{u}$ can be covered in $V_{small}$. Actually, $\tilde{u}$ is the smallest element of $T_{small}(s)$ (or of $V_{small}\backslash M_{small}(s)$, if $T_{small}(s)$ is empty), and $T_{small}(s)$ depends only on the covered nodes of $G_{small}$.
   1.423 +\newline
   1.424 +Simple induction on $d$ shows that the set of covered nodes of $G_{small}$ is unique, if $d$ is given, so $\tilde{u}$ is unique if $d$ is given.
   1.425 +\end{proof}
   1.426 +
   1.427 +\begin{definition}
    1.428 +An order $(u_{\sigma(1)},u_{\sigma(2)},..,u_{\sigma(|V_{small}|)})$ of $V_{small}$ is a \textbf{matching order} if there exists a total ordering relation $\prec$ such that VF2 using $\prec$ finds a pair for $u_{\sigma(d)}$ on the $d$-th level, for all $d\in\{1,..,|V_{small}|\}$.
   1.429 +\end{definition}
   1.430 +
   1.431 +\begin{claim}\label{claim:MOclaim}
    1.432 +A node order is a matching order iff the nodes of every component form an interval in the node sequence, and every node except the first node of its component is connected to a previous node of the same component. The order of the components is arbitrary.
    1.433 +\\Formally spoken, an order $(u_{\sigma(1)},u_{\sigma(2)},..,u_{\sigma(|V_{small}|)})$ of $V_{small}$ is a matching order $\Leftrightarrow$ $\forall G'_{small}=(V'_{small},E'_{small})\ component\ of\ G_{small}: \forall i: (\exists j : j<i\wedge u_{\sigma(j)},u_{\sigma(i)}\in V'_{small})\Rightarrow \exists k : k < i \wedge (\forall l: k\leq l\leq i \Rightarrow u_{\sigma(l)}\in V'_{small}) \wedge (u_{\sigma{(k)}},u_{\sigma{(i)}})\in E'_{small}$, where $i,j,k,l\in \{1,..,|V_{small}|\}$\newline
   1.434 +\end{claim}
   1.435 +\begin{proof}
   1.436 +Suppose a matching order is given. It has to be shown that the node sequence has a structure described above.\\
   1.437 +Let $G'_{small}=(V'_{small},E'_{small})$ be an arbitrary component and $i$ an arbitrary index. 
   1.438 +\newline
    1.439 +$(\exists j : j<i\wedge u_{\sigma(j)},u_{\sigma(i)}\in V'_{small})\Rightarrow u_{\sigma(i)}$ is not the first covered node of $G_{small}\ \Rightarrow u_{\sigma(i)}$ is connected to a covered node $u_{\sigma(k)}$ where $k<i$, since $u_{\sigma(i)}\in T_{small}(s)$ for some $s$ and $T_{small}(s)\subseteq{V'_{small}}$ contains only nodes connected to at least one covered node. It is easy to see that $\forall l: k\leq l\leq i \Rightarrow u_{\sigma(l)}\in V'_{small}$, since $T_{small}(s)$ contains only nodes connected to some covered ones while it is non-empty, and if it were empty, then $u_{\sigma(k)}$ and $u_{\sigma(i)}$ would not be in the same component.
   1.440 +
    1.441 +Now, let us show that if a node sequence has the special structure described above, then it is a matching order.\\ The sequence $(u_{\sigma(1)},u_{\sigma(2)},..,u_{\sigma(|V_{small}|)})$ defines a total ordering $u_{\sigma(1)}\prec u_{\sigma(2)}\prec ..\prec u_{\sigma(|V_{small}|)}$, and this total ordering determines exactly the matching order $(u_{\sigma(1)},u_{\sigma(2)},..,u_{\sigma(|V_{small}|)})$.
   1.442 +\end{proof}
   1.443 +
    1.444 +To sum up, a total ordering always uniquely determines a matching order, and every matching order can be obtained from a total ordering; however, several different total orderings may determine the same matching order.
   1.445 +\subsection{Idea behind the algorithm}
   1.446 +The goal is to find a matching order in which the algorithm is able to recognize inconsistency or prune the infeasible branches on the highest levels and goes deep only if it is needed.
   1.447 +
   1.448 +\begin{notation}
    1.449 +Let $\mathbf{Conn_{H}(u)}:=|\Gamma_{small}(u)\cap H|$, that is, the number of neighbours of $u$ which are in $H$, where $u\in V_{small}$ and $H\subseteq V_{small}$.
   1.450 +\end{notation}
   1.451 +
    1.452 +The principal question is the following. Suppose a state $s$ is given. For which node of $T_{small}(s)$ is it the hardest to find a consistent pair in $G_{large}$? The more covered neighbours a node of $T_{small}(s)$ has --- i.e. the larger its $Conn_{M_{small}(s)}$ is --- the more restrictive the consistency constraints on its pair are.
   1.453 +
    1.454 +In biology, most graphs are sparse, thus several nodes of $T_{small}(s)$ may have the same $Conn_{M_{small}(s)}$, which makes it reasonable to define a secondary and a tertiary order among them.
    1.455 +Following the observation above, the secondary ordering prefers, among the nodes with the same $Conn_{M_{small}(s)}$, those with the most uncovered neighbours, in order to increase the $Conn_{M_{small}(s)}$ values of the uncovered nodes as much as possible.
    1.456 +The tertiary ordering prefers nodes having the rarest uncovered labels.
   1.457 +
    1.458 +Note that the secondary ordering is equivalent to ordering by $deg$, which is a static quantity, in contrast to the dynamic quantities used above.
   1.459 +
    1.460 +These rules may easily result in a matching order which contains the nodes of a long path consecutively; such nodes may have low $Conn$ values, and a long path is easily matched into $G_{large}$. To avoid this, a BFS order is used, which provides the shortest possible paths.
   1.461 +\newline
   1.462 +
    1.463 +In the following, some examples are described on which VF2 may be slow, although they are easily solved by using a proper matching order.
   1.464 +
   1.465 +\begin{example}
   1.466 +Suppose $G_{small}$ can be mapped into $G_{large}$ in many ways without node labels. Let $u\in V_{small}$ and $v\in V_{large}$.
   1.467 +\newline
   1.468 +$lab(u):=black$
   1.469 +\newline
   1.470 +$lab(v):=black$
   1.471 +\newline
   1.472 +$lab(\tilde{u}):=red \  \forall \tilde{u}\in (V_{small}\backslash \{u\})$
   1.473 +\newline
   1.474 +$lab(\tilde{v}):=red \  \forall \tilde{v}\in (V_{large}\backslash \{v\})$
   1.475 +\newline
   1.476 +
    1.477 +Now, any mapping by the node label function $lab$ must contain $(u,v)$, since $u$ is black and no node of $V_{large}$ other than $v$ has a black label. If $u$ happened to be the last node to get covered, VF2 would check only in the last steps whether $u$ can be matched to $v$.
   1.478 +\newline
    1.479 +However, had $u$ been the first matched node, it would have been matched to $v$ immediately, so all the mappings in which the node labels cannot correspond would have been precluded.
   1.480 +\end{example}
   1.481 +
   1.482 +\begin{example}
    1.483 +Suppose there is no node label given, $G_{small}$ is a small graph which cannot be mapped into $G_{large}$, and $u\in V_{small}$.
    1.484 +\newline
    1.485 +Let $G'_{small}:=(V_{small}\cup \{u'_{1},u'_{2},..,u'_{k}\},E_{small}\cup \{(u,u'_{1}),(u'_{1},u'_{2}),..,(u'_{k-1},u'_{k})\})$, that is, $G'_{small}$ is $G_{small}$ extended with a path of length $k$ which is disjoint from $G_{small}$ except that one of its endpoints is connected to $u\in V_{small}$.
    1.486 +\newline
    1.487 +Is there a subgraph of $G_{large}$ which is isomorphic to $G'_{small}$?
    1.488 +\newline
    1.489 +If the nodes of the path happened to be the first $k$ nodes of the matching order, the algorithm would iterate through all possible paths of length $k$ in $G_{large}$, and only then would it recognize that none of them can be extended to $G'_{small}$.
    1.490 +\newline
    1.491 +However, had it started by matching $G_{small}$, it would not have matched any node of the path.
   1.492 +\end{example}
   1.493 +
   1.494 +These examples may look artificial, but the same problems also appear in real-world examples, even though in a less obvious way.
   1.495 +
   1.496 +\subsection{Total ordering}
   1.497 +Instead of the total ordering relation, the matching order will be searched directly.
   1.498 +\begin{notation}
   1.499 +Let \textbf{F$_\mathcal{M}$(l)}$:=|\{v\in V_{large} : l=lab(v)\}|-|\{u\in V_{small}\backslash \mathcal{M} : l=lab(u)\}|$ , where $l$ is a label and $\mathcal{M}\subseteq V_{small}$.
   1.500 +\end{notation}
   1.501 +
   1.502 +\begin{definition}Let $\mathbf{arg\ max}_{f}(S) :=\{u : u\in S \wedge f(u)=max_{v\in S}\{f(v)\}\}$ and $\mathbf{arg\ min}_{f}(S) := arg\ max_{-f}(S)$, where $S$ is a finite set and $f:S\longrightarrow \mathbb{R}$.
   1.503 +\end{definition}
   1.504 +
   1.505 +\begin{algorithm}
    1.506 +\algtext*{EndIf}% do not print "end if"
    1.507 +\algtext*{EndFor}% do not print "end for"
    1.508 +\algtext*{EndProcedure}% do not print "end procedure"
    1.509 +\algtext*{EndWhile}
    1.510 +\caption{\hspace{0.5cm}The method of VF2++ for determining the node order}\label{alg:VF2PPPseu}
   1.511 +\begin{algorithmic}[1]
   1.512 +\Procedure{VF2++order}{}
   1.513 +  \State $\mathcal{M}$ := $\emptyset$ \Comment{matching order}
   1.514 +  \While{$V_{small}\backslash \mathcal{M} \neq\emptyset$}
   1.515 +  \State $r\in$ arg max$_{deg}$ (arg min$_{F_\mathcal{M}\circ lab}(V_{small}\backslash \mathcal{M})$)\label{alg:findMin}
   1.516 +  \State Compute $T$, a BFS tree with root node $r$.
   1.517 +  \For{$d=0,1,...,depth(T)$}
   1.518 +  \State $V_d$:=nodes of the $d$-th level
   1.519 +  \State Process $V_d$ \Comment{See Algorithm \ref{alg:VF2PPProcess1}) and \ref{alg:VF2PPProcess2})}
   1.520 +  \EndFor
   1.521 +  \EndWhile
   1.522 +\EndProcedure
   1.523 +\end{algorithmic}
   1.524 +\end{algorithm}
   1.525 +
   1.526 +\begin{algorithm}
    1.527 +\algtext*{EndIf}% do not print "end if"
    1.528 +\algtext*{EndFor}% do not print "end for"
    1.529 +\algtext*{EndProcedure}% do not print "end procedure"
    1.530 +\algtext*{EndWhile}
    1.531 +\caption{\hspace{.5cm}A method for processing a level of the BFS tree}\label{alg:VF2PPProcess1}
   1.532 +\begin{algorithmic}[1]
   1.533 +\Procedure{VF2++ProcessLevel1}{$V_{d}$}
   1.534 +  \While{$V_d\neq\emptyset$}
   1.535 +  \State $m\in$ arg min$_{F_\mathcal{M}\circ\ lab}($ arg max$_{deg}($arg max$_{Conn_{\mathcal{M}}}(V_{d})))$
   1.536 +  \State $V_d:=V_d\backslash m$
   1.537 +  \State Append node $m$ to the end of $\mathcal{M}$
   1.538 +  \State Refresh $F_\mathcal{M}$
   1.539 +  \EndWhile
   1.540 +\EndProcedure
   1.541 +\end{algorithmic}
   1.542 +\end{algorithm}
   1.543 +
   1.544 +\begin{algorithm}
    1.545 +\algtext*{EndIf}% do not print "end if"
    1.546 +\algtext*{EndFor}% do not print "end for"
    1.547 +\algtext*{EndProcedure}% do not print "end procedure"
    1.548 +\algtext*{EndWhile}
    1.549 +\caption{\hspace{0.5cm}Another method for processing a level of the BFS tree}\label{alg:VF2PPProcess2}
   1.550 +\begin{algorithmic}[1]
   1.551 +\Procedure{VF2++ProcessLevel2}{$V_{d}$}
   1.552 +  \State Sort $V_d$ in descending lex. order by $(Conn_{\mathcal{M}},deg,-F_\mathcal{M})$
   1.553 +  \State Append the sorted $V_d$ to the end of $\mathcal{M}$
   1.554 +  \State Refresh $F_\mathcal{M}$
   1.555 +\EndProcedure
   1.556 +\end{algorithmic}
   1.557 +\end{algorithm}
    1.558 +\textbf{Algorithm~\ref{alg:VF2PPPseu})} is a high level description of the matching order procedure of VF2++. It computes a BFS tree for each component; the root of each tree is a node of the component with the rarest label and, among these, with the largest degree (line \ref{alg:findMin}). \textbf{Algorithm \ref{alg:VF2PPProcess1})} and \textbf{\ref{alg:VF2PPProcess2})} are two different methods to process a level of the BFS tree.
   1.559 +
    1.560 +After sorting the nodes of the current level into descending lexicographic order by $(Conn_{\mathcal{M}},deg,-F_\mathcal{M})$, \textbf{Algorithm \ref{alg:VF2PPProcess2})} appends them to the matching order $\mathcal{M}$ all at once and refreshes $F_\mathcal{M}$ only once, whereas \textbf{Algorithm \ref{alg:VF2PPProcess1})} appends the nodes one by one and refreshes $F_\mathcal{M}$ after each of them, so it works with up-to-date label information and may result in a more efficient matching order.
   1.561 +
   1.562 +\textbf{Claim~\ref{claim:MOclaim})} shows that \textbf{Algorithm \ref{alg:VF2PPPseu})} provides a matching order.
   1.563 +
   1.564 +
   1.565 +\subsection{Cutting rules}
   1.566 +\label{VF2PPCuttingRules}
   1.567 +This section presents the cutting rules of VF2++, which are improved by using extra information coming from the node labels.
   1.568 +\begin{notation}
   1.569 +Let $\mathbf{\Gamma_{small}^{l}(u)}:=\{\tilde{u} : lab(\tilde{u})=l \wedge \tilde{u}\in \Gamma_{small} (u)\}$ and $\mathbf{\Gamma_{large}^{l}(v)}:=\{\tilde{v} : lab(\tilde{v})=l \wedge \tilde{v}\in \Gamma_{large} (v)\}$, where $u\in V_{small}$, $v\in V_{large}$ and $l$ is a label.
   1.570 +\end{notation}
   1.571 +
   1.572 +\subsubsection{Induced subgraph isomorphism}
   1.573 +\begin{claim}
   1.574 +\[LabCut_{IND}((u,v),M(s))\!:=\!\!\!\!\!\bigvee_{l\ is\ label}\!\!\!\!\!\!\!|\Gamma_{large}^{l} (v) \cap T_{large}(s)|\!<\!|\Gamma_{small}^{l}(u)\cap T_{small}(s)|\ \vee\]\[\bigvee_{l\ is\ label} \newline |\Gamma_{large}^{l}(v)\cap \tilde{T}_{large}(s)| < |\Gamma_{small}^{l}(u)\cap \tilde{T}_{small}(s)|\] is a cutting function by IND.
   1.575 +\end{claim}
   1.576 +\begin{proof}
    1.577 +It has to be shown that $LabCut_{IND}((u,v),M(s))=true\Rightarrow$ the mapping cannot be extended to a whole mapping.\\
    1.578 +$LabCut_{IND}((u,v),M(s))=true$ iff the following holds: $\\\exists l: |\Gamma_{large}^{l} (v)\ \cap\ T_{large}(s)| < |\Gamma_{small}^{l} (u)\ \cap\ T_{small}(s)| \vee |\Gamma_{large}^{l}(v)\cap \tilde{T}_{large}(s)| < |\Gamma_{small}^{l}(u)\cap \tilde{T}_{small}(s)|$.
   1.579 +
   1.580 +Suppose that $|\Gamma_{large}^{l} (v)\ \cap\ T_{large}(s)| < |\Gamma_{small}^{l} (u)\ \cap\ T_{small}(s)|$. Each node of $\Gamma_{small}^{l} (u)\ \cap\ T_{small}(s)$ has to be matched to a node in $\Gamma_{large}^{l} (v)\ \cap\ T_{large}(s)$, so $\Gamma_{large}^{l} (v)\ \cap\ T_{large}(s)$ can not be smaller than $\Gamma_{small}^{l} (u)\ \cap\ T_{small}(s)$. That is why $M(s)$ can not be extended to a whole mapping.
   1.581 +
    1.582 +Otherwise, $|\Gamma_{large}^{l}(v)\cap \tilde{T}_{large}(s)| < |\Gamma_{small}^{l}(u)\cap \tilde{T}_{small}(s)|$ has to hold. Similarly, each node of $\Gamma_{small}^{l}(u)\cap \tilde{T}_{small}(s)$ has to be matched to a node of $\Gamma_{large}^{l}(v)\cap \tilde{T}_{large}(s)$, i.e. $\Gamma_{large}^{l}(v)\cap \tilde{T}_{large}(s)$ cannot be smaller than $\Gamma_{small}^{l}(u)\cap \tilde{T}_{small}(s)$ if $M(s)$ is extendible.
   1.583 +\end{proof}
   1.584 +The following claims can be proven similarly.
   1.585 +\subsubsection{Graph isomorphism}
   1.586 +\begin{claim}
   1.587 +\[LabCut_{ISO}((u,v),M(s))\!:=\!\!\!\!\!\bigvee_{l\ is\ label}\!\!\!\!\!\!\!|\Gamma_{large}^{l} (v) \cap T_{large}(s)|\!\neq\!|\Gamma_{small}^{l}(u)\cap T_{small}(s)|\  \vee\]\[\bigvee_{l\ is\ label} \newline |\Gamma_{large}^{l}(v)\cap \tilde{T}_{large}(s)| \neq |\Gamma_{small}^{l}(u)\cap \tilde{T}_{small}(s)|\] is a cutting function by ISO.
   1.588 +\end{claim}
   1.589 +
   1.590 +\subsubsection{Subgraph isomorphism}
   1.591 +\begin{claim}
    1.592 +\[LabCut_{SUB}((u,v),M(s))\!:=\!\!\!\!\!\bigvee_{l\ is\ label}\!\!\!\!\!\!\!|\Gamma_{large}^{l} (v) \cap T_{large}(s)|\!<\!|\Gamma_{small}^{l}(u)\cap T_{small}(s)|\] is a cutting function by SUB.
   1.593 +\end{claim}
   1.594 +
   1.595 +
   1.596 +
   1.597 +\subsection{Implementation details}
   1.598 +This section provides a detailed summary of an efficient implementation of VF2++.
   1.599 +\subsubsection{Storing a mapping}
    1.600 +After fixing an arbitrary node order ($u_0, u_1, .., u_{|G_{small}|-1}$) of $G_{small}$, an array $M$ can be used to store the current mapping in the following way.
   1.601 +\[
   1.602 + M[i] = 
   1.603 +  \begin{cases} 
   1.604 +   v & if\ (u_i,v)\ is\ in\ the\ mapping\\
   1.605 +   INVALID & if\ no\ node\ has\ been\ mapped\ to\ u_i.
   1.606 +  \end{cases}
   1.607 +\]
    1.608 +where $i\in\{0,1, ..,|G_{small}|-1\}$, $v\in V_{large}$, and $INVALID$ means ``no node''.
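+
+As an illustration, the following C++ fragment sketches this storage scheme;
+the type and the identifiers are ours and do not refer to the actual LEMON
+implementation.
+\begin{verbatim}
+#include <vector>
+
+constexpr int INVALID = -1;                   // plays the role of "no node"
+
+struct Mapping {
+    std::vector<int> M;                       // M[i] = image of u_i, or INVALID
+    explicit Mapping(int nSmall) : M(nSmall, INVALID) {}
+
+    void add(int i, int v)    { M[i] = v; }   // the pair (u_i, v) enters the mapping
+    void remove(int i)        { M[i] = INVALID; }
+    bool covered(int i) const { return M[i] != INVALID; }
+};
+\end{verbatim}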
    1.609 +\subsubsection{Avoiding recursion}
    1.610 +Exploring the state space was described in a recursive fashion using sets (see \textbf{Algorithm~\ref{alg:VF2Pseu})}), which makes the algorithm easy to understand, but it does not directly suggest an efficient implementation. The following approach, which avoids recursion and uses lookup tables instead of sets, is not only fast but also has linear space complexity.
   1.611 +
    1.612 +The recursion of \textbf{Algorithm~\ref{alg:VF2Pseu})} can be realized as a while loop with a loop counter $depth$ denoting the current depth of the recursion.
    1.613 +Fixing a matching order, let $M$ denote the array storing the current mapping.
   1.614 +The initial state is associated with the empty mapping, which means that $\forall i: M[i]=INVALID$ and $depth=0$.
   1.615 +In case of a recursive call, $depth$ has to be incremented, while in case of a return, it has to be decremented.
    1.616 +Based on \textbf{Claim~\ref{claim:claimCoverFromLeft})}, $M$ is $INVALID$ from index $depth+1$ onwards and is not $INVALID$ before index $depth$, i.e. $\forall i: i < depth \Rightarrow M[i]\neq INVALID$ and $\forall i: i > depth \Rightarrow M[i]= INVALID$. $M[depth]$ changes while the state is being processed, but the property holds both before stepping back to the predecessor state and before exploring a successor state.
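+
+A minimal C++ sketch of this iterative scheme follows. The helper
+\texttt{nextCandidate} is an assumption of the sketch: it is supposed to
+return the next untried candidate of the $depth$-th node after the previously
+tried one (or $INVALID$ if there is none); \texttt{cons} and \texttt{cut}
+stand for the consistency and cutting functions.
+\begin{verbatim}
+#include <vector>
+
+template <class NextCandidate, class Cons, class Cut, class Output>
+void vf2Iterative(int nSmall, NextCandidate nextCandidate,
+                  Cons cons, Cut cut, Output output)
+{
+    const int INVALID = -1;
+    std::vector<int> M(nSmall, INVALID);      // M[i] = pair of u_i, or INVALID
+    int depth = 0;                            // current depth of the "recursion"
+    while (depth >= 0) {
+        if (depth == nSmall) {                // whole mapping found
+            output(M);
+            --depth;                          // step back and keep searching
+            continue;
+        }
+        int v = nextCandidate(depth, M[depth], M); // resume after the last tried node
+        M[depth] = v;
+        if (v == INVALID) {                   // no candidate left on this level
+            --depth;                          // return to the predecessor state
+        } else if (cons(depth, v, M) && !cut(depth, v, M)) {
+            ++depth;                          // explore the successor state
+        }
+    }
+}
+\end{verbatim}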
   1.617 +
    1.618 +The necessary part of the candidate set is easily maintainable or computable by following \textbf{Section~\ref{candidateComputingVF2})}. A much faster method has been designed for biological and other sparse graphs; see the next section for details.
   1.619 +
   1.620 +\subsubsection{Calculating the candidates for a node}
   1.621 +Being aware of \textbf{Claim~\ref{claim:claimCoverFromLeft})}, the task is not to maintain the candidate set, but to generate the candidate nodes in $G_{large}$ for a given node $u\in V_{small}$.
    1.622 +In the case of an expanding problem type and a mapping $M$, if a node $v\in V_{large}$ is a potential pair of $u\in V_{small}$, then $\forall u'\in V_{small} : (u,u')\in E_{small}\ and\ u'\ is\ covered\ by\ M\ \Rightarrow (v,Pair(M,u'))\in E_{large}$. That is, each covered neighbour of $u$ has to be mapped to a covered neighbour of $v$.
   1.623 +
    1.624 +Consequently, an algorithm running in $\Theta(deg)$ time can be given, provided that the component containing $u$ has a covered node. In this case, choose a covered neighbour $u'$ of $u$ arbitrarily --- such a node exists by \textbf{Claim~\ref{claim:MOclaim})}. Since all the candidates of $u$ are among the uncovered neighbours of $Pair(M,u')$, only $deg(Pair(M,u'))$ nodes have to be checked.
   1.625 +
    1.626 +An easy trick is to choose a covered neighbour $u'$ for which the number of uncovered neighbours of $Pair(M,u')$ is as small as possible.
   1.627 +
    1.628 +Note that if $u$ is the first node of its component, then all the uncovered nodes of $G_{large}$ are candidates, so a sublinear method is impossible in this case.
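+
+The candidate generation described above can be sketched in C++ as follows;
+the containers and names are illustrative assumptions, not the layout of the
+actual implementation.
+\begin{verbatim}
+#include <vector>
+
+using Graph = std::vector<std::vector<int>>;
+
+// pairSmall[u'] is the pair of u' or -1; coveredLarge[v] tells whether v is
+// covered.  If u has a covered neighbour u', every candidate of u is an
+// uncovered neighbour of Pair(M,u'); otherwise all uncovered nodes of
+// G_large are candidates and the sketch falls back to a linear scan.
+std::vector<int> candidatesOf(int u, const Graph& gSmall, const Graph& gLarge,
+                              const std::vector<int>& pairSmall,
+                              const std::vector<bool>& coveredLarge)
+{
+    std::vector<int> cand;
+    for (int us : gSmall[u]) {
+        int anchor = pairSmall[us];
+        if (anchor == -1) continue;              // u' is not covered yet
+        for (int v : gLarge[anchor])
+            if (!coveredLarge[v]) cand.push_back(v);
+        return cand;                             // one covered neighbour suffices
+    }
+    for (int v = 0; v < (int)gLarge.size(); ++v) // no covered neighbour at all
+        if (!coveredLarge[v]) cand.push_back(v);
+    return cand;
+}
+\end{verbatim}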
   1.629 +
   1.630 +
   1.631 +\subsubsection{Determining the node order}
   1.632 +This section describes how the node order preprocessing method of VF2++ can efficiently be implemented.
   1.633 +
   1.634 +For using lookup tables, the node labels are associated with the numbers $\{0,1,..,|K|-1\}$, where $K$ is the set of the labels. It enables $F_\mathcal{M}$ to be stored in an array, for which $F_\mathcal{M}[i]=F_\mathcal{M}(i)$ where $i=0,1,..,|K|-1$. At first, $\mathcal{M}=\emptyset$, so $F_\mathcal{M}[i]$ is the number of nodes in $V_{small}$ having label i, which is easy to compute in $\Theta(|V_{small}|)$ steps.
   1.635 +
   1.636 +$\mathcal{M}\subseteq V_{small}$ can be represented as an array of size $|V_{small}|$.
   1.637 +
    1.638 +The BFS tree is computed by using a FIFO data structure, which is usually implemented as a linked list, but this can be avoided by using the array $\mathcal{M}$ itself: $\mathcal{M}$ contains all the nodes seen so far, one pointer shows where the first node of the FIFO is, and another one shows where the next discovered node has to be inserted. In this way, the nodes of each level of the BFS tree can be processed in place by \textbf{Algorithm \ref{alg:VF2PPProcess1})} and \textbf{\ref{alg:VF2PPProcess2})} by swapping nodes.
   1.639 +
    1.640 +After a node $u$ is placed at the next position of the node order, $F_\mathcal{M}[lab(u)]$ has to be decreased by one, since one more node with label $lab(u)$ has been covered; that is why min selection sort is preferred, which yields the elements from left to right in descending order, see \textbf{Algorithm \ref{alg:VF2PPProcess1})}.
   1.641 +
    1.642 +Note that using a $\Theta(n^2)$ sort does not slow down the procedure at all on biological (and, in general, on sparse) graphs, since they have few nodes on each level. If a level had a large number of nodes, \textbf{Algorithm \ref{alg:VF2PPProcess2})} with a $\Theta(n\log n)$ sort or a bucket sort would seem to be a better choice, but it may reduce the efficiency of the matching procedure, since $F_\mathcal{M}$ cannot be refreshed immediately, so it cannot provide up-to-date label information.
   1.643 +
   1.644 +Note that the \textit{while loop} of \textbf{Algorithm \ref{alg:VF2PPPseu})} takes one iteration per graph component and the graphs in biology are mostly connected.
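+
+The following C++ sketch illustrates how a BFS level can be processed in the
+spirit of \textbf{Algorithm \ref{alg:VF2PPProcess1})}: a selection step picks
+the lexicographically largest node with respect to $(Conn,deg,-F)$, and the
+label table is refreshed immediately after each selection. The container
+layouts are illustrative assumptions only.
+\begin{verbatim}
+#include <tuple>
+#include <vector>
+
+using Graph = std::vector<std::vector<int>>;
+
+// level: nodes of the current BFS level; lab: node labels; F: label table;
+// inOrder: marks nodes already appended to the matching order 'order'.
+void processLevel(std::vector<int>& level, const Graph& gSmall,
+                  const std::vector<int>& lab, std::vector<int>& F,
+                  std::vector<bool>& inOrder, std::vector<int>& order)
+{
+    auto conn = [&](int u) {                  // number of already ordered neighbours
+        int c = 0;
+        for (int w : gSmall[u]) c += inOrder[w];
+        return c;
+    };
+    while (!level.empty()) {
+        int best = 0;                         // selection step of the sort
+        for (int i = 1; i < (int)level.size(); ++i) {
+            int a = level[i], b = level[best];
+            auto keyA = std::make_tuple(conn(a), (int)gSmall[a].size(), -F[lab[a]]);
+            auto keyB = std::make_tuple(conn(b), (int)gSmall[b].size(), -F[lab[b]]);
+            if (keyA > keyB) best = i;        // lexicographic (Conn, deg, -F)
+        }
+        int m = level[best];
+        level.erase(level.begin() + best);
+        order.push_back(m);                   // append m to the matching order
+        inOrder[m] = true;
+        --F[lab[m]];                          // refresh the label table immediately
+    }
+}
+\end{verbatim}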
   1.645 +\subsubsection{Cutting rules}
    1.646 +In \textbf{Section \ref{VF2PPCuttingRules})}, the cutting rules were described using the sets $T_{small}$, $T_{large}$, $\tilde T_{small}$ and $\tilde T_{large}$, which depend on the current mapping (i.e. on the current state). The aim is to check the labeled cutting rules of VF2++ in $\Theta(deg)$ time.
   1.647 +
    1.648 +Firstly, suppose that these four sets are given in such a way that checking whether a node belongs to a certain set takes constant time, e.g. they are given by their 0-1 characteristic vectors. Let $L$ be an initially all-zero integer lookup table of size $|K|$. After incrementing $L[lab(u')]$ for all $u'\in \Gamma_{small}(u) \cap T_{small}(s)$ and decrementing $L[lab(v')]$ for all $v'\in\Gamma_{large} (v) \cap T_{large}(s)$, the first part of the cutting rules can be checked in $\Theta(deg)$ time by considering the signs of the entries of $L$. Resetting $L$ to zero takes $\Theta(deg)$ time again, which makes it possible to use the same table throughout the whole algorithm.
   1.649 +The second part of the cutting rules can be verified using the same method with $\tilde T_{small}$ and $\tilde T_{large}$ instead of $T_{small}$ and $T_{large}$. Thus, the overall complexity is $\Theta(deg)$.
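+
+A C++ sketch of this check follows; it verifies one part of the labeled
+cutting rule with a reusable lookup table \texttt{L}, and the same routine is
+called again with the characteristic vectors of $\tilde T_{small}(s)$ and
+$\tilde T_{large}(s)$ for the second part. All names are illustrative.
+\begin{verbatim}
+#include <vector>
+
+using Graph = std::vector<std::vector<int>>;
+
+// L must contain only zeros on entry and is restored to zeros before return.
+bool labelCountsViolated(int u, int v, const Graph& gSmall, const Graph& gLarge,
+                         const std::vector<int>& labSmall,
+                         const std::vector<int>& labLarge,
+                         std::vector<int>& L,
+                         const std::vector<bool>& inTSmall,
+                         const std::vector<bool>& inTLarge)
+{
+    std::vector<int> touched;                 // labels to reset afterwards
+    for (int us : gSmall[u])
+        if (inTSmall[us]) { ++L[labSmall[us]]; touched.push_back(labSmall[us]); }
+    for (int vl : gLarge[v])
+        if (inTLarge[vl]) { --L[labLarge[vl]]; touched.push_back(labLarge[vl]); }
+
+    bool violated = false;
+    for (int l : touched)
+        if (L[l] > 0) violated = true;        // more demand than supply for label l
+
+    for (int l : touched) L[l] = 0;           // reset L so that it can be reused
+    return violated;
+}
+\end{verbatim}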
   1.650 +
    1.651 +Another integer lookup table storing the number of covered neighbours of each node of $G_{large}$ provides all the information about the sets $T_{large}$ and $\tilde T_{large}$; it can be maintained in $\Theta(deg)$ time when a pair is added or removed, by incrementing or decrementing the proper entries. A further improvement is that the values $L[lab(u')]$ needed when checking $u$ depend only on $u$, i.e. on the size of the mapping, so for each $u\in V_{small}$ an array of pairs (label, number of neighbours with that label) can be stored to skip the maintenance operations. Note that these arrays are of size at most $deg$. Without this trick, the number of covered neighbours has to be stored for each node of $G_{small}$ as well, in order to obtain the sets $T_{small}$ and $\tilde T_{small}$.
   1.652 +
   1.653 +Using similar tricks, the consistency function can be evaluated in $\Theta(deg)$ steps, as well.
   1.654 +
   1.655 +\section{The VF2 Plus Algorithm}
    1.656 +The VF2 Plus algorithm is a recently improved version of VF2. It was compared with the state-of-the-art algorithms in \cite{VF2Plus} and has proven to be competitive with RI, the best algorithm on biological graphs.
   1.657 +\\
   1.658 +A short summary of VF2 Plus follows, which uses the notation and the conventions of the original paper.
   1.659 +
   1.660 +\subsection{Ordering procedure}
    1.661 +VF2 Plus uses a sorting procedure that prefers nodes of $V_{small}$ with the lowest probability of finding a pair in $V_{large}$ and with the highest number of connections to the nodes already sorted by the algorithm.
   1.662 +
   1.663 +\begin{definition}
   1.664 +$(u,v)$ is a \textbf{feasible pair}, if $lab(u)=lab(v)$ and $deg(u)\leq deg(v)$, where $u\in{V_{small}}$ and $ v\in{V_{large}}$. 
   1.665 +\end{definition}
    1.666 +$P_{lab}(L):=$ the a priori probability of finding a node with label $L$ in $V_{large}$
    1.667 +\newline
    1.668 +$P_{deg}(d):=$ the a priori probability of finding a node with degree $d$ in $V_{large}$
    1.669 +\newline
    1.670 +$P(u):=P_{lab}(lab(u))\cdot\sum_{d'\geq deg(u)}P_{deg}(d')$\\
   1.671 +$M$ is the set of already sorted nodes, $T$ is the set of nodes candidate to be selected, and $degreeM$ of a node is the number of its neighbours in $M$.
   1.672 +\begin{algorithm}
    1.673 +\algtext*{EndIf}% do not print "end if"
    1.674 +\algtext*{EndFor}% do not print "end for"
    1.675 +\algtext*{EndProcedure}% do not print "end procedure"
    1.676 +\algtext*{EndWhile}
    1.677 +\caption{\hspace{0.5cm}The sorting procedure of VF2 Plus}\label{alg:VF2PlusPseu}
   1.678 +\begin{algorithmic}[1]
   1.679 +\Procedure{VF2 Plus order}{}
   1.680 +  \State Select the node with the lowest $P$.
   1.681 +    \If {more nodes share the same $P$}
   1.682 +    \State select the one with maximum degree
   1.683 +    \EndIf
   1.684 +    \If {more nodes share the same $P$ and have the max degree}
   1.685 +    \State select the first
   1.686 +    \EndIf
   1.687 +  \State Put the selected node in the set $M$. \label{alg:putIn}
   1.688 +  \State Put all its unsorted neighbours in the set $T$.
   1.689 +  \If {$M\neq V_{small}$}
   1.690 +  \State From set $T$ select the node with maximum $degreeM$.
   1.691 +  \If {more nodes have maximum $degreeM$}
   1.692 +    \State Select the one with the lowest $P$
   1.693 +  \EndIf
   1.694 +  \If {more nodes have maximum $degreeM$ and $P$}
   1.695 +    \State Select the first.
   1.696 +  \EndIf
   1.697 +  \State \textbf{goto \ref{alg:putIn}.}
   1.698 +  \EndIf
   1.699 +\EndProcedure
   1.700 +\end{algorithmic}
   1.701 +\end{algorithm}
   1.702 +
   1.703 +Using these notations, \textbf{Algorithm~\ref{alg:VF2PlusPseu})} provides the description of the sorting procedure.
   1.704 +
    1.705 +Note that $P(u)$ is not the exact probability of finding a consistent pair for $u$ by choosing a node of $V_{large}$ randomly, since $P_{lab}$ and $P_{deg}$ are not independent; calculating the real probability would take quadratic time, which may be reduced by using suitable lookup tables.
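+
+The a priori probabilities can be obtained from simple histograms of
+$V_{large}$, as the following C++ sketch shows; the structure and its names
+are assumptions of ours, not the interface of the original implementation.
+\begin{verbatim}
+#include <vector>
+
+struct Priors {
+    std::vector<double> pLab;      // P_lab(l): fraction of V_large nodes with label l
+    std::vector<double> pDegTail;  // sum of P_deg(d') over d' >= d
+
+    Priors(const std::vector<int>& labLarge, const std::vector<int>& degLarge,
+           int nLabels, int maxDeg)
+        : pLab(nLabels, 0.0), pDegTail(maxDeg + 2, 0.0)
+    {
+        const double n = labLarge.size();
+        for (int l : labLarge) pLab[l] += 1.0 / n;
+        std::vector<double> pDeg(maxDeg + 2, 0.0);
+        for (int d : degLarge) pDeg[d] += 1.0 / n;
+        for (int d = maxDeg; d >= 0; --d)          // tail sums of the histogram
+            pDegTail[d] = pDeg[d] + pDegTail[d + 1];
+    }
+
+    // P(u): a rare label and a high degree both give a small value.
+    double score(int labU, int degU) const { return pLab[labU] * pDegTail[degU]; }
+};
+\end{verbatim}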
   1.706 +
   1.707 +\newpage
   1.708 +\section{Experimental results}
    1.709 +This section compares the performance of VF2++ and VF2 Plus. Both algorithms run orders of magnitude faster than VF2, so including VF2 itself in the comparison was not reasonable.
   1.710 +\subsection{Biological graphs}
    1.711 +The tests have been executed on a recent biological dataset created for the International Contest on Pattern Search in Biological Databases~\cite{Content}, which consists of Molecule, Protein and Contact Map graphs extracted from the Protein Data Bank~\cite{ProteinDataBank}.
   1.712 +
    1.713 +The molecule dataset contains small graphs with fewer than 100 nodes and an average degree of less than 3. The protein dataset contains graphs having 500--10\,000 nodes and an average degree of 4, while the contact map dataset contains graphs with 150--800 nodes and an average degree of 20.
   1.714 +\\
   1.715 +
    1.716 +In the following, both induced subgraph isomorphism and graph isomorphism are examined.
   1.717 +
   1.718 +\subsubsection{Induced subgraph isomorphism}
    1.719 +This dataset contains a set of graph pairs, and \textbf{all} the induced subgraph isomorphisms have to be found between them. \textbf{Figures \ref{fig:INDProt}, \ref{fig:INDContact}} and \textbf{\ref{fig:INDMolecule}} show the solution times for the problems in the problem set.
   1.720 +
   1.721 +\begin{figure}[H]
   1.722 +\begin{center}
   1.723 +\begin{tikzpicture}
   1.724 +  \begin{axis}[title=Proteins IND,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.725 +  =major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label        style={/pgf/number format/1000 sep = \thinspace}]
   1.726 +  %\addplot+[only marks] table {proteinsOrig.txt};
   1.727 +  \addplot[mark=*,mark size=1.2pt,color=blue] table {Orig/Proteins.256.txt};
   1.728 +  \addplot[mark=triangle*,mark size=1.8pt,color=red] table {VF2PPLabel/Proteins.256.txt};
   1.729 +  \end{axis}
   1.730 +  \end{tikzpicture}
   1.731 +\end{center}
   1.732 +\vspace*{-0.8cm}
    1.733 +\caption{Both algorithms show linear behaviour on protein graphs. VF2++ is more than 10 times faster than VF2 Plus.} \label{fig:INDProt}
   1.734 +\end{figure}
   1.735 +
   1.736 +\begin{figure}[H]
   1.737 +\begin{center}
   1.738 +\begin{tikzpicture}
   1.739 +\begin{axis}[title=Contact Maps IND,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.740 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.741 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.742 +\addplot table {Orig/ContactMaps.128.txt};
   1.743 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {VF2PPLabel/ContactMaps.128.txt};
   1.744 +\end{axis}
   1.745 +\end{tikzpicture}
   1.746 +\end{center}
   1.747 +\vspace*{-0.8cm}
    1.748 +\caption{On Contact Maps, VF2++ runs in near-constant time, while VF2 Plus shows near-linear behaviour.} \label{fig:INDContact}
   1.749 +\end{figure}
   1.750 +
   1.751 +\begin{figure}[H]
   1.752 +\begin{center}
   1.753 +\begin{tikzpicture}
   1.754 +\begin{axis}[title=Molecules IND,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.755 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.756 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.757 +\addplot table {Orig/Molecules.32.txt};
   1.758 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {VF2PPLabel/Molecules.32.txt};
   1.759 +\end{axis}
   1.760 +\end{tikzpicture}
   1.761 +\end{center}
   1.762 +\vspace*{-0.8cm}
    1.763 +\caption{In the case of Molecules, the algorithms show similar behaviour, but VF2++ is almost twice as fast even on such small graphs.} \label{fig:INDMolecule}
   1.764 +\end{figure}
   1.765 +
   1.766 +
    1.767 +\subsubsection{Graph isomorphism}
    1.768 +In this experiment, the nodes of each graph in the database have been shuffled, and an isomorphism between the shuffled and the original graph has been searched for. For runtime results, see \textbf{Figures \ref{fig:ISOProt}, \ref{fig:ISOContact}} and \textbf{\ref{fig:ISOMolecule}}.
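          +
          +The shuffling step itself is straightforward; a possible implementation on a plain edge-list representation is sketched below (node labels, which are not shown, are carried over through the same permutation). The types and names are illustrative and do not refer to the actual test framework.
          +\begin{verbatim}
          +// Sketch of producing an isomorphic copy under a random node permutation.
          +#include <algorithm>
          +#include <numeric>
          +#include <random>
          +#include <utility>
          +#include <vector>
          +
          +using EdgeList = std::vector<std::pair<int,int>>;
          +
          +EdgeList shuffledCopy(const EdgeList& edges, int n, std::mt19937& rng) {
          +  std::vector<int> perm(n);
          +  std::iota(perm.begin(), perm.end(), 0);
          +  std::shuffle(perm.begin(), perm.end(), rng);   // random relabelling
          +  EdgeList out;
          +  out.reserve(edges.size());
          +  for (auto e : edges) out.push_back({perm[e.first], perm[e.second]});
          +  return out;  // isomorphic to the original graph by construction
          +}
          +\end{verbatim}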
   1.769 +\begin{figure}[H]
   1.770 +\begin{center}
   1.771 +\begin{tikzpicture}
   1.772 +\begin{axis}[title=Proteins ISO,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.773 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.774 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.775 +\addplot table {Orig/proteinsIso.txt};
   1.776 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {VF2PPLabel/proteinsIso.txt};
   1.777 +\end{axis}
   1.778 +\end{tikzpicture}
   1.779 +\end{center}
   1.780 +\vspace*{-0.8cm}
    1.781 +\caption{On protein graphs, VF2 Plus shows superlinear runtime, while VF2++ runs in near-constant time. The difference is about two orders of magnitude on large graphs.}\label{fig:ISOProt}
   1.782 +\end{figure}
   1.783 +
   1.784 +\begin{figure}[H]
   1.785 +\begin{center}
   1.786 +\begin{tikzpicture}
   1.787 +\begin{axis}[title=Contact Maps ISO,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.788 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.789 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.790 +\addplot table {Orig/contactMapsIso.txt};
   1.791 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {VF2PPLabel/contactMapsIso.txt};
   1.792 +\end{axis}
   1.793 +\end{tikzpicture}
   1.794 +\end{center}
   1.795 +\vspace*{-0.8cm}
   1.796 +\caption{The results are closer to each other on Contact Maps, but VF2++ still performs consistently better.}\label{fig:ISOContact}
   1.797 +\end{figure}
   1.798 +
   1.799 +\begin{figure}[H]
   1.800 +\begin{center}
   1.801 +\begin{tikzpicture}
   1.802 +\begin{axis}[title=Molecules ISO,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.803 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.804 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.805 +\addplot table {Orig/moleculesIso.txt};
   1.806 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {VF2PPLabel/moleculesIso.txt};
   1.807 +\end{axis}
   1.808 +\end{tikzpicture}
   1.809 +\end{center}
   1.810 +\vspace*{-0.8cm}
   1.811 +\caption{In the case of Molecules, there is not such a significant difference, but VF2++ seems to be faster as the number of nodes increases.}\label{fig:ISOMolecule}
   1.812 +\end{figure}
   1.813 +
   1.814 +
   1.815 +\subsection{Random graphs}
    1.816 +This section compares VF2++ with VF2 Plus on large random graphs. The node labels are uniformly distributed.
   1.817 +Let $\delta$ denote the average degree.
    1.818 +For the parameters of the problems solved in the experiments, please see the top of each chart.
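          +
          +As an illustration, the sketch below generates a connected random graph with a prescribed average degree: a random spanning tree guarantees connectivity, and further random edges are added until the required edge count is reached. This is only one plausible generator, not necessarily the one used to produce the test instances.
          +\begin{verbatim}
          +// Sketch of a connected random graph generator (illustrative only).
          +#include <algorithm>
          +#include <random>
          +#include <set>
          +#include <utility>
          +#include <vector>
          +
          +std::vector<std::pair<int,int>> randomConnected(int n, double delta,
          +                                                std::mt19937& rng) {
          +  std::set<std::pair<int,int>> edges;
          +  std::uniform_int_distribution<int> pick(0, n - 1);
          +  // random spanning tree first, so that the graph is connected
          +  for (int v = 1; v < n; ++v) {
          +    int u = std::uniform_int_distribution<int>(0, v - 1)(rng);
          +    edges.insert({std::min(u, v), std::max(u, v)});
          +  }
          +  // add further random edges until the average degree reaches delta
          +  long long target = static_cast<long long>(n * delta / 2.0);
          +  while ((long long)edges.size() < target) {
          +    int a = pick(rng), b = pick(rng);
          +    if (a != b) edges.insert({std::min(a, b), std::max(a, b)});
          +  }
          +  return {edges.begin(), edges.end()};
          +}
          +\end{verbatim}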
   1.819 +\subsubsection{Graph isomorphism}
    1.820 +To evaluate the efficiency of the algorithms in the case of graph isomorphism, connected graphs with fewer than 20\,000 nodes have been considered. After generating a random graph and shuffling its nodes, an isomorphism between the original and the shuffled graph had to be found. \textbf{Figures \ref{fig:randISO5}, \ref{fig:randISO10}, \ref{fig:randISO15}, \ref{fig:randISO35}, \ref{fig:randISO45}} and \textbf{\ref{fig:randISO100}} show the runtime results on graph sets of various density.
   1.821 +
   1.822 +\begin{figure}[H]
   1.823 +\begin{center}
   1.824 +\begin{tikzpicture}
   1.825 +\begin{axis}[title={Random ISO, $\delta = 5$},xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.826 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.827 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.828 +\addplot table {randGraph/iso/vf2pIso5_1.txt};
   1.829 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/iso/vf2ppIso5_1.txt};
   1.830 +\end{axis}
   1.831 +\end{tikzpicture}
   1.832 +\end{center}
   1.833 +\vspace*{-0.8cm}
   1.834 +\caption{}\label{fig:randISO5}
   1.835 +\end{figure}
   1.836 +
   1.837 +\begin{figure}[H]
   1.838 +\begin{center}
   1.839 +\begin{tikzpicture}
   1.840 +\begin{axis}[title={Random ISO, $\delta = 10$},xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.841 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.842 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.843 +\addplot table {randGraph/iso/vf2pIso10_1.txt};
   1.844 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/iso/vf2ppIso10_1.txt};
   1.845 +\end{axis}
   1.846 +\end{tikzpicture}
   1.847 +\end{center}
   1.848 +\vspace*{-0.8cm}
   1.849 +\caption{}\label{fig:randISO10}
   1.850 +\end{figure}
   1.851 +
   1.852 +\begin{figure}[H]
   1.853 +\begin{center}
   1.854 +\begin{tikzpicture}
   1.855 +\begin{axis}[title={Random ISO, $\delta = 15$},xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.856 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.857 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.858 +\addplot table {randGraph/iso/vf2pIso15_1.txt};
   1.859 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/iso/vf2ppIso15_1.txt};
   1.860 +\end{axis}
   1.861 +\end{tikzpicture}
   1.862 +\end{center}
   1.863 +\vspace*{-0.8cm}
   1.864 +\caption{}\label{fig:randISO15}
   1.865 +\end{figure}
   1.866 +
   1.867 +\begin{figure}[H]
   1.868 +\begin{center}
   1.869 +\begin{tikzpicture}
   1.870 +\begin{axis}[title={Random ISO, $\delta = 35$},xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.871 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.872 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.873 +\addplot table {randGraph/iso/vf2pIso35_1.txt};
   1.874 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/iso/vf2ppIso35_1.txt};
   1.875 +\end{axis}
   1.876 +\end{tikzpicture}
   1.877 +\end{center}
   1.878 +\vspace*{-0.8cm}
   1.879 +\caption{}\label{fig:randISO35}
   1.880 +\end{figure}
   1.881 +
   1.882 +\begin{figure}[H]
   1.883 +\begin{center}
   1.884 +\begin{tikzpicture}
   1.885 +\begin{axis}[title={Random ISO, $\delta = 45$},xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.886 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.887 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.888 +\addplot table {randGraph/iso/vf2pIso45_1.txt};
   1.889 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/iso/vf2ppIso45_1.txt};
   1.890 +\end{axis}
   1.891 +\end{tikzpicture}
   1.892 +\end{center}
   1.893 +\vspace*{-0.8cm}
   1.894 +\caption{}\label{fig:randISO45}
   1.895 +\end{figure}
   1.896 +
   1.897 +\begin{figure}[H]
   1.898 +\begin{center}
   1.899 +\begin{tikzpicture}
   1.900 +\begin{axis}[title={Random ISO, $\delta = 100$},xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},grid
   1.901 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.902 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.903 +\addplot table {randGraph/iso/vf2pIso100_1.txt};
   1.904 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/iso/vf2ppIso100_1.txt};
   1.905 +\end{axis}
   1.906 +\end{tikzpicture}
   1.907 +\end{center}
   1.908 +\vspace*{-0.8cm}
   1.909 +\caption{}\label{fig:randISO100}
   1.910 +\end{figure}
   1.911 +
   1.912 +
    1.913 +Considering the graph isomorphism problem, VF2++ consistently outperforms its rival, especially on sparse graphs. The reason for the slightly superlinear behaviour of VF2++ on denser graphs is the larger number of nodes in the BFS tree constructed in \textbf{Algorithm~\ref{alg:VF2PPPseu}}.
   1.914 +
   1.915 +\subsubsection{Induced subgraph isomorphism}
    1.916 +This section provides a comparison of VF2++ and VF2 Plus in the case of induced subgraph isomorphism. In addition to the size of the large graph, the size of the small graph also dramatically influences the hardness of a given problem, so small graphs of various sizes have been examined to give an overall picture.
   1.917 +
    1.918 +For each chart, a number $0<\rho<1$ has been fixed and the following procedure has been executed 150 times. A large graph $G_{large}$ is generated, 10 of its induced subgraphs having $\rho\,|V_{large}|$ nodes are chosen, and a mapping is searched for each of the 10 subgraphs by both graph matching algorithms.
    1.919 +The $\delta = 5, 10, 35$ and $\rho = 0.05, 0.1, 0.3, 0.6, 0.8, 0.95$ cases have been examined (see \textbf{Figures \ref{fig:randIND5}, \ref{fig:randIND10}} and \textbf{\ref{fig:randIND35}}), and for each $\delta$, a cumulative chart is given as well, which excludes $\rho = 0.05$ and $0.1$ for the sake of clarity (see \textbf{Figures \ref{fig:randIND5Sum}, \ref{fig:randIND10Sum}} and \textbf{\ref{fig:randIND35Sum}}).
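          +
          +The query graphs can be extracted, for example, by randomized breadth-first growth from a random start node until $\rho\,|V_{large}|$ nodes have been collected, as in the following sketch; this is again only one plausible construction, with illustrative names.
          +\begin{verbatim}
          +// Sketch of selecting the node set of a random induced subgraph.
          +#include <algorithm>
          +#include <random>
          +#include <utility>
          +#include <vector>
          +
          +std::vector<int> randomInducedNodes(const std::vector<std::vector<int>>& g,
          +                                    double rho, std::mt19937& rng) {
          +  int n = g.size();
          +  int target = std::max(1, (int)(rho * n));
          +  std::vector<char> inSub(n, 0);
          +  std::vector<int> chosen, frontier;
          +  int start = std::uniform_int_distribution<int>(0, n - 1)(rng);
          +  inSub[start] = 1; chosen.push_back(start); frontier.push_back(start);
          +  while ((int)chosen.size() < target && !frontier.empty()) {
          +    // take a random frontier node and grow through its neighbours
          +    size_t i =
          +      std::uniform_int_distribution<size_t>(0, frontier.size() - 1)(rng);
          +    std::swap(frontier[i], frontier.back());
          +    int v = frontier.back(); frontier.pop_back();
          +    for (int w : g[v])
          +      if (!inSub[w] && (int)chosen.size() < target) {
          +        inSub[w] = 1; chosen.push_back(w); frontier.push_back(w);
          +      }
          +  }
          +  // the induced subgraph keeps exactly the edges running inside 'chosen'
          +  return chosen;
          +}
          +\end{verbatim}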
   1.920 +
   1.921 +
   1.922 +
   1.923 +
   1.924 +
   1.925 +\begin{figure}[H]
   1.926 +\vspace*{-0.8cm}
   1.927 +\begin{subfigure}[b]{0.55\textwidth}
   1.928 +\begin{center}
   1.929 +\begin{tikzpicture}
   1.930 +\begin{axis}[title={Random IND, $\delta = 5$, $\rho = 0.05$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
   1.931 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
   1.932 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.933 +\addplot table {randGraph/ind/vf2pInd5_0.05.txt};
   1.934 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.05.txt};
   1.935 +\end{axis}
   1.936 +\end{tikzpicture}
   1.937 +\end{center}
   1.938 +     \end{subfigure}
   1.939 +     \begin{subfigure}[b]{0.55\textwidth}
   1.940 +\begin{center}
   1.941 +\begin{tikzpicture}
   1.942 +\begin{axis}[title={Random IND, $\delta = 5$, $\rho = 0.1$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
   1.943 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
   1.944 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.945 +\addplot table {randGraph/ind/vf2pInd5_0.1.txt};
   1.946 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.1.txt};
   1.947 +\end{axis}
   1.948 +\end{tikzpicture}
   1.949 +\end{center}
   1.950 +\end{subfigure}
   1.951 +\hspace{1cm}
   1.952 +\begin{subfigure}[b]{0.55\textwidth}
   1.953 +\begin{center}
   1.954 +\begin{tikzpicture}
   1.955 +\begin{axis}[title={Random IND, $\delta = 5$, $\rho = 0.3$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
   1.956 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
   1.957 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.958 +\addplot table {randGraph/ind/vf2pInd5_0.3.txt};
   1.959 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.3.txt};
   1.960 +\end{axis}
   1.961 +\end{tikzpicture}
   1.962 +\end{center}
   1.963 +     \end{subfigure}
   1.964 +     \begin{subfigure}[b]{0.55\textwidth}
   1.965 +\begin{center}
   1.966 +\begin{tikzpicture}
   1.967 +\begin{axis}[title={Random IND, $\delta = 5$, $\rho = 0.6$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
   1.968 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
   1.969 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.970 +\addplot table {randGraph/ind/vf2pInd5_0.6.txt};
   1.971 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.6.txt};
   1.972 +\end{axis}
   1.973 +\end{tikzpicture}
   1.974 +\end{center}
   1.975 +\end{subfigure}
   1.976 +\begin{subfigure}[b]{0.55\textwidth}
   1.977 +          
   1.978 +\begin{tikzpicture}
   1.979 +\begin{axis}[title={Random IND, $\delta = 5$, $\rho = 0.8$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
   1.980 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
   1.981 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.982 +\addplot table {randGraph/ind/vf2pInd5_0.8.txt};
   1.983 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.8.txt};
   1.984 +\end{axis}
   1.985 +\end{tikzpicture}
   1.986 +     \end{subfigure}
   1.987 +     \begin{subfigure}[b]{0.55\textwidth}
   1.988 +\begin{tikzpicture}
   1.989 +\begin{axis}[title={Random IND, $\delta = 5$, $\rho = 0.95$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
   1.990 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
   1.991 +%\addplot+[only marks] table {proteinsOrig.txt};
   1.992 +\addplot table {randGraph/ind/vf2pInd5_0.95.txt};
   1.993 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.95.txt};
   1.994 +\end{axis}
   1.995 +\end{tikzpicture}
   1.996 +\end{subfigure}
   1.997 +\vspace*{-0.8cm}
   1.998 +\caption{IND on graphs having an average degree of 5.}\label{fig:randIND5}
   1.999 +\end{figure}
  1.1000 +
  1.1001 +\begin{figure}[H]
  1.1002 +\begin{center}
  1.1003 +\begin{tikzpicture}
  1.1004 +\begin{axis}[title={Rand IND Summary, $\delta = 5$, $\rho = 0.3, 0.6, 0.8, 0.95$},height=17cm,width=16cm,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},line width=0.8pt,grid
  1.1005 +=major,mark size=1pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
  1.1006 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1007 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd5_0.3.txt};
  1.1008 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.3.txt};
  1.1009 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd5_0.6.txt};
  1.1010 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.6.txt};
  1.1011 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd5_0.8.txt};
  1.1012 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.8.txt};
  1.1013 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd5_0.95.txt};
  1.1014 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd5_0.95.txt};
  1.1015 +\end{axis}
  1.1016 +\end{tikzpicture}
  1.1017 +\end{center}
  1.1018 +\vspace*{-0.8cm}
   1.1019 +\caption{Cumulative chart for $\delta=5$.}\label{fig:randIND5Sum}
  1.1020 +\end{figure}
  1.1021 +
  1.1022 +
  1.1023 +
  1.1024 +\begin{figure}[H]
  1.1025 +\vspace*{-0.8cm}
  1.1026 +\begin{subfigure}[b]{0.55\textwidth}
  1.1027 +\begin{center}
  1.1028 +\begin{tikzpicture}
  1.1029 +\begin{axis}[title={Random IND, $\delta = 10$, $\rho = 0.05$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1030 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1031 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1032 +\addplot table {randGraph/ind/vf2pInd10_0.05.txt};
  1.1033 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.05.txt};
  1.1034 +\end{axis}
  1.1035 +\end{tikzpicture}
  1.1036 +\end{center}
  1.1037 +     \end{subfigure}
  1.1038 +     \begin{subfigure}[b]{0.55\textwidth}
  1.1039 +\begin{center}
  1.1040 +\begin{tikzpicture}
  1.1041 +\begin{axis}[title={Random IND, $\delta = 10$, $\rho = 0.1$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1042 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1043 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1044 +\addplot table {randGraph/ind/vf2pInd10_0.1.txt};
  1.1045 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.1.txt};
  1.1046 +\end{axis}
  1.1047 +\end{tikzpicture}
  1.1048 +\end{center}
  1.1049 +\end{subfigure}
  1.1050 +\hspace{1cm}
  1.1051 +\begin{subfigure}[b]{0.55\textwidth}
  1.1052 +\begin{center}
  1.1053 +\begin{tikzpicture}
  1.1054 +\begin{axis}[title={Random IND, $\delta = 10$, $\rho = 0.3$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1055 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1056 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1057 +\addplot table {randGraph/ind/vf2pInd10_0.3.txt};
  1.1058 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.3.txt};
  1.1059 +\end{axis}
  1.1060 +\end{tikzpicture}
  1.1061 +\end{center}
  1.1062 +     \end{subfigure}
  1.1063 +     \begin{subfigure}[b]{0.55\textwidth}
  1.1064 +\begin{center}
  1.1065 +\begin{tikzpicture}
  1.1066 +\begin{axis}[title={Random IND, $\delta = 10$, $\rho = 0.6$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1067 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1068 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1069 +\addplot table {randGraph/ind/vf2pInd10_0.6.txt};
  1.1070 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.6.txt};
  1.1071 +\end{axis}
  1.1072 +\end{tikzpicture}
  1.1073 +\end{center}
  1.1074 +\end{subfigure}
  1.1075 +\begin{subfigure}[b]{0.55\textwidth}
  1.1076 +          
  1.1077 +\begin{tikzpicture}
  1.1078 +\begin{axis}[title={Random IND, $\delta = 10$, $\rho = 0.8$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1079 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1080 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1081 +\addplot table {randGraph/ind/vf2pInd10_0.8.txt};
  1.1082 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.8.txt};
  1.1083 +\end{axis}
  1.1084 +\end{tikzpicture}
  1.1085 +     \end{subfigure}
  1.1086 +     \begin{subfigure}[b]{0.55\textwidth}
  1.1087 +\begin{tikzpicture}
  1.1088 +\begin{axis}[title={Random IND, $\delta = 10$, $\rho = 0.95$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1089 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
  1.1090 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1091 +\addplot table {randGraph/ind/vf2pInd10_0.95.txt};
  1.1092 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.95.txt};
  1.1093 +\end{axis}
  1.1094 +\end{tikzpicture}
  1.1095 +\end{subfigure}
  1.1096 +\vspace*{-0.8cm}
  1.1097 +\caption{IND on graphs having an average degree of 10.}\label{fig:randIND10}
  1.1098 +\end{figure}
  1.1099 +
  1.1100 +\begin{figure}[H]
  1.1101 +\begin{center}
  1.1102 +\begin{tikzpicture}
  1.1103 +\begin{axis}[title={Rand IND Summary, $\delta = 10$, $\rho = 0.3, 0.6, 0.8, 0.95$},height=17cm,width=16cm,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},line width=0.8pt,grid
  1.1104 +=major,mark size=1pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
  1.1105 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1106 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd10_0.3.txt};
  1.1107 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.3.txt};
  1.1108 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd10_0.6.txt};
  1.1109 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.6.txt};
  1.1110 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd10_0.8.txt};
  1.1111 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.8.txt};
  1.1112 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd10_0.95.txt};
  1.1113 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd10_0.95.txt};
  1.1114 +\end{axis}
  1.1115 +\end{tikzpicture}
  1.1116 +\end{center}
  1.1117 +\vspace*{-0.8cm}
   1.1118 +\caption{Cumulative chart for $\delta=10$.}\label{fig:randIND10Sum}
  1.1119 +\end{figure}
  1.1120 +
  1.1121 +
  1.1122 +
  1.1123 +\begin{figure}[H]
  1.1124 +\vspace*{-0.8cm}
  1.1125 +\begin{subfigure}[b]{0.55\textwidth}
  1.1126 +\begin{center}
  1.1127 +\begin{tikzpicture}
  1.1128 +\begin{axis}[title={Random IND, $\delta = 35$, $\rho = 0.05$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1129 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1130 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1131 +\addplot table {randGraph/ind/vf2pInd35_0.05.txt};
  1.1132 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.05.txt};
  1.1133 +\end{axis}
  1.1134 +\end{tikzpicture}
  1.1135 +\end{center}
  1.1136 +     \end{subfigure}
  1.1137 +     \begin{subfigure}[b]{0.55\textwidth}
  1.1138 +\begin{center}
  1.1139 +\begin{tikzpicture}
  1.1140 +\begin{axis}[title={Random IND, $\delta = 35$, $\rho = 0.1$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1141 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1142 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1143 +\addplot table {randGraph/ind/vf2pInd35_0.1.txt};
  1.1144 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.1.txt};
  1.1145 +\end{axis}
  1.1146 +\end{tikzpicture}
  1.1147 +\end{center}
  1.1148 +\end{subfigure}
  1.1149 +\hspace{1cm}
  1.1150 +\begin{subfigure}[b]{0.55\textwidth}
  1.1151 +\begin{center}
  1.1152 +\begin{tikzpicture}
  1.1153 +\begin{axis}[title={Random IND, $\delta = 35$, $\rho = 0.3$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1154 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1155 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1156 +\addplot table {randGraph/ind/vf2pInd35_0.3.txt};
  1.1157 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.3.txt};
  1.1158 +\end{axis}
  1.1159 +\end{tikzpicture}
  1.1160 +\end{center}
  1.1161 +     \end{subfigure}
  1.1162 +     \begin{subfigure}[b]{0.55\textwidth}
  1.1163 +\begin{center}
  1.1164 +\begin{tikzpicture}
  1.1165 +\begin{axis}[title={Random IND, $\delta = 35$, $\rho = 0.6$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1166 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1167 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1168 +\addplot table {randGraph/ind/vf2pInd35_0.6.txt};
  1.1169 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.6.txt};
  1.1170 +\end{axis}
  1.1171 +\end{tikzpicture}
  1.1172 +\end{center}
  1.1173 +\end{subfigure}
  1.1174 +\begin{subfigure}[b]{0.55\textwidth}
  1.1175 +          
  1.1176 +\begin{tikzpicture}
  1.1177 +\begin{axis}[title={Random IND, $\delta = 35$, $\rho = 0.8$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1178 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \space}]
  1.1179 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1180 +\addplot table {randGraph/ind/vf2pInd35_0.8.txt};
  1.1181 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.8.txt};
  1.1182 +\end{axis}
  1.1183 +\end{tikzpicture}
  1.1184 +     \end{subfigure}
  1.1185 +     \begin{subfigure}[b]{0.55\textwidth}
  1.1186 +\begin{tikzpicture}
  1.1187 +\begin{axis}[title={Random IND, $\delta = 35$, $\rho = 0.95$},width=7.2cm,height=6cm,xlabel={target size},ylabel={time (ms)},ylabel near ticks,legend entries={VF2 Plus,VF2++},grid
  1.1188 +=major,mark size=1.2pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
  1.1189 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1190 +\addplot table {randGraph/ind/vf2pInd35_0.95.txt};
  1.1191 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.95.txt};
  1.1192 +\end{axis}
  1.1193 +\end{tikzpicture}
  1.1194 +\end{subfigure}
  1.1195 +\vspace*{-0.8cm}
  1.1196 +\caption{IND on graphs having an average degree of 35.}\label{fig:randIND35}
  1.1197 +\end{figure}
  1.1198 +
  1.1199 +\begin{figure}[H]
  1.1200 +\begin{center}
  1.1201 +\begin{tikzpicture}
  1.1202 +\begin{axis}[title={Rand IND Summary, $\delta = 35$, $\rho = 0.3, 0.6, 0.8, 0.95$},height=17cm,width=16cm,xlabel={target size},ylabel={time (ms)},legend entries={VF2 Plus,VF2++},line width=0.8pt,grid
  1.1203 +=major,mark size=1pt, legend style={at={(0,1)},anchor=north west},scaled x ticks = false,x tick label style={/pgf/number format/1000 sep = \thinspace}]
  1.1204 +%\addplot+[only marks] table {proteinsOrig.txt};
  1.1205 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd35_0.3.txt};
  1.1206 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.3.txt};
  1.1207 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd35_0.6.txt};
  1.1208 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.6.txt};
  1.1209 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd35_0.8.txt};
  1.1210 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.8.txt};
  1.1211 +\addplot[mark=*,mark size=1.5pt,color=blue] table {randGraph/ind/vf2pInd35_0.95.txt};
  1.1212 +\addplot[mark=triangle*,mark size=1.8pt,color=red] table {randGraph/ind/vf2ppInd35_0.95.txt};
  1.1213 +\end{axis}
  1.1214 +\end{tikzpicture}
  1.1215 +\end{center}
  1.1216 +\vspace*{-0.8cm}
   1.1217 +\caption{Cumulative chart for $\delta=35$.}\label{fig:randIND35Sum}
  1.1218 +\end{figure}
  1.1219 +
   1.1220 +Based on these experiments, VF2++ is faster than VF2 Plus and is able to handle really large graphs in milliseconds. Note that when IND was considered and the small graphs had proportionally few nodes ($\rho = 0.05$ or $\rho = 0.1$), VF2 Plus occasionally produced inefficient node orders (e.g.\ see the $\delta=10$ case in \textbf{Figure~\ref{fig:randIND10}}). Had these instances been excluded, the charts would look similar to the other ones.
   1.1221 +Unsurprisingly, as denser graphs are considered, both VF2++ and VF2 Plus slow down slightly, but they remain practically usable even on graphs having 10\,000 nodes.
  1.1222 +
  1.1223 +
  1.1224 +
  1.1225 +
  1.1226 +\newpage
  1.1227 +\section{Conclusion}
   1.1228 +In this paper, after providing a short summary of the recent algorithms, a new graph matching algorithm based on VF2, called VF2++, has been presented and analyzed from a practical viewpoint.
  1.1229 +
   1.1230 +Recognizing the importance of the node order and determining an efficient one, VF2++ is able to match graphs of thousands of nodes in practically linear time, including preprocessing. In addition to the proper order, VF2++ uses more efficient consistency and cutting rules, which are easy to compute and enable the algorithm to prune most of the unfruitful branches without going astray.
  1.1231 +
   1.1232 +In order to show the efficiency of the new method, it has been compared to VF2 Plus, the best competing algorithm according to \cite{VF2Plus}.
  1.1233 +
  1.1234 +The experiments show that VF2++ consistently outperforms VF2 Plus on biological graphs. It seems to be asymptotically faster on protein and on contact map graphs in the case of induced subgraph isomorphism, while in the case of graph isomorphism, it has definitely better asymptotic behaviour on protein graphs.
  1.1235 +
   1.1236 +Regarding random sparse graphs, not only has VF2++ proved to be faster than VF2 Plus, but it also shows practically linear behaviour in the cases of both induced subgraph isomorphism and graph isomorphism.
  1.1237 +
  1.1238 +
  1.1239  
  1.1240  %% The Appendices part is started with the command \appendix;
  1.1241  %% appendix sections are then done as normal sections
  1.1242 @@ -160,20 +1381,21 @@
  1.1243  %% If you have bibdatabase file and want bibtex to generate the
  1.1244  %% bibitems, please use
  1.1245  %%
  1.1246 -%%  \bibliographystyle{elsarticle-num} 
  1.1247 -%%  \bibliography{<your bibdatabase>}
  1.1248 +\bibliographystyle{elsarticle-num} 
  1.1249 +\bibliography{bibliography}
  1.1250  
  1.1251  %% else use the following coding to input the bibitems directly in the
  1.1252  %% TeX file.
  1.1253  
  1.1254 -\begin{thebibliography}{00}
  1.1255 +%% \begin{thebibliography}{00}
  1.1256  
  1.1257 -%% \bibitem{label}
  1.1258 -%% Text of bibliographic item
  1.1259 +%% %% \bibitem{label}
  1.1260 +%% %% Text of bibliographic item
  1.1261  
  1.1262 -\bibitem{}
  1.1263 +%% \bibitem{}
  1.1264  
  1.1265 -\end{thebibliography}
  1.1266 +%% \end{thebibliography}
  1.1267 +
  1.1268  \end{document}
  1.1269  \endinput
  1.1270  %%