Network-based protein structural classification

Experimental determination of protein function is resource-consuming. As an alternative, computational prediction of protein function has received attention. In this context, protein structural classification (PSC) can help, by allowing for determining structural classes of currently unclassified proteins based on their features, and then relying on the fact that proteins with similar structures have similar functions. Existing PSC approaches rely on sequence-based or direct three-dimensional (3D) structure-based protein features. By contrast, we first model 3D structures of proteins as protein structure networks (PSNs). Then, we use network-based features for PSC. We propose the use of graphlets, state-of-the-art features in many research areas of network science, in the task of PSC. Moreover, because graphlets can deal only with unweighted PSNs, and because accounting for edge weights when constructing PSNs could improve PSC accuracy, we also propose a deep learning framework that automatically learns network features from weighted PSNs. When evaluated on a large set of approximately 9,400 CATH and approximately 12,800 SCOP protein domains (spanning 36 PSN sets), the best of our proposed approaches are superior to existing PSC approaches in terms of accuracy, with comparable running times. Our data and code are available at https://doi.org/10.5281/zenodo.3787922


Introduction
Motivation and related work. Proteins are major molecules of life, and thus understanding their cellular function is important. However, doing so experimentally is costly and time-consuming 1. Instead, computational approaches are often used for this purpose; they are much more efficient because they leverage the fact that (sequence or 3-dimensional) structural similarity of proteins often indicates their functional similarity. One type of such computational approaches is protein structural classification (PSC) 2. A PSC framework uses structural features of proteins with known labels (typically CATH 3 or SCOP 4 structural classes) to learn a classification model in a supervised manner (i.e., by including the labels in the process of training the model). Then, the structural features of a protein with an unknown label can be used as input to the classification model to determine the structural class of the protein. This information can in turn be used to predict the function of a protein based on the functions of other proteins that belong to the same structural class as the protein of interest. In this paper, we focus on the PSC problem.
Note that there exists a related computational problem that can help with protein function prediction: protein structural comparison 5. However, unlike PSC: 1) protein structural comparison uses structural features of proteins with known or unknown labels in an unsupervised rather than supervised manner (i.e., it ignores any potential label information), and 2) it uses the features to compute pairwise similarities between the proteins in the hope that highly similar proteins will have the same label (where the labels are used only after the fact), rather than predicting the label of a single protein. In other words, both the goals and the working mechanisms of PSC and protein structural comparison are different. Hence, the two approach categories are not comparable to and cannot be fairly evaluated against each other.
Since proteins with high sequence similarity typically also have high 3-dimensional (3D) structural and functional similarity, traditional PSC approaches have relied only on sequence-based protein features 6. However, proteins with low sequence similarity can still show high 3D structural and functional similarity 7. On the other hand, proteins with high sequence similarity can have low 3D structural and functional similarity 8. Hence, PSC based on 3D-structural as opposed to (or in addition to) sequence features could more correctly identify the structural class of a protein 9.
Interestingly, recent 3D-structural approaches, although supervised, have focused on classification based on protein pairs 2,10,11. For example, they consider a pair of protein structures, one with a known label (class) and the other with an unknown label, and if the proteins are similar enough in terms of their 3D-structural (and possibly also sequence) features, they assign the known label of the currently classified protein to the currently unclassified protein. As such, these approaches fall somewhere in-between PSC (because both are supervised, but PSC analyzes a single protein at a time) and protein structural comparison (because both focus on protein pairs, but protein structural comparison is unsupervised). Therefore, they are not comparable to and cannot be directly evaluated against approaches that solve the PSC problem as defined in our study. Our extensive literature search did not reveal any recent 3D-structural approaches that can solve our considered PSC problem, which is why we do not consider such approaches in our evaluation.
Typically, 3D-structural approaches extract features directly from the 3D structures of proteins and then use these "raw" features 9,12. In contrast, protein 3D structures can first be modeled as protein structure networks (PSNs), in which nodes are amino acids and edges link amino acids that are spatially close enough to each other. Then, network-based features can be extracted from the PSNs and used in the task of PSC. To our knowledge, no one has done this yet, and we aim to close this gap.
We believe that PSN-based PSC is promising. This is because we recently used PSN-based protein representations in the task of unsupervised protein structural comparison. Specifically, we proposed an approach called GRAFENE that relies on graphlets as PSN features of a protein 5; graphlets are small subgraphs, i.e., lego-like building blocks of complex networks 13. Given a set of PSNs as input, GRAFENE first extracts different versions of graphlet features from each PSN. Then, it quantifies the structural similarity between each pair of the PSNs by comparing their features. GRAFENE outperformed other state-of-the-art 3D-structural protein comparison approaches, including DaliLite 14 and TM-align 15. Also, GRAFENE outperformed an existing non-graphlet PSN-based protein comparison approach, Existing-all, and a baseline sequence-based approach, AAComposition (see Methods).
Given that graphlet-based PSN features have been successful in the task of unsupervised protein structural comparison, here we use them for the first time in the task of supervised PSC, with the hypothesis that they will improve upon state-of-the-art non-graphlet and non-PSN features that have traditionally been used in this task, when all features are run under the same classifier. Note that there exists a supervised approach that used graphlets to study proteins 16. However, it did so in the task of functional classification of amino acids, i.e., nodes in a PSN, rather than in our task of structural classification of proteins, i.e., PSNs. Also, this approach only used the concept of regular graphlets, while we also test a newer concept of ordered graphlets 17 (see Methods), which outperformed regular graphlets in the GRAFENE study 5.
In general, a PSC approach comprises two key aspects: 1) a method to extract features from a protein structure and 2) selection of a classification algorithm to be trained on the features (and protein labels). Hence, existing PSC approaches can be divided into two broad categories. The first category includes approaches that extract novel features to predict the structural class of a protein while relying on existing classification algorithms 6,18,19. The second category includes approaches that focus on improving a classification algorithm while relying on existing features 20-22. Our study belongs to the first category, since our goal is to evaluate graphlet features against other state-of-the-art PSC features in a fair evaluation framework, i.e., under the same (representative) classifier, without necessarily aiming to find the best classifier.
Our contributions. We propose a new PSC framework called NETPCLASS (network-based protein structural classification). As one part of our framework, we propose the use of graphlet- and thus PSN-based protein features in the PSC task under an existing classification algorithm. As another part of our framework, we aim to achieve the following. Graphlets can deal only with edge-unweighted networks. Yet, we hypothesize that the existing PSN definition, which links with unweighted edges those pairs of amino acids whose 3D spatial distance is below some predefined threshold, can benefit from including as edge weights the actual spatial distances, and by doing so for all pairs of amino acids in the 3D structure rather than only for those pairs that are below the given threshold. So, we model a PSN as a weighted adjacency matrix. Because extracting features from such a matrix is a non-trivial task, we propose a deep learning-based PSC approach that achieves this automatically. More details about our study are as follows: 1. We evaluate nine versions of graphlet features that were already used in the task of unsupervised protein structural comparison 5, to see how they compare to each other in the task of supervised PSC. Also, we use principal component analysis (PCA) to reduce the dimensionality of graphlet features, as well as of the baseline sequence (AAComposition) and non-graphlet network (Existing-all) features 5, by keeping only the most important information from the given feature. We use the same classification algorithm to learn (train) classification models for each of the above 22 features, in order to fairly compare their performance. Here, as a proof-of-concept, we use a simple yet powerful logistic regression (LR) classifier, whose output indicates, for the given input protein and each class, the likelihood that the protein belongs to the given class. We use LR rather than, e.g., simple regression or an even potentially more powerful support vector machine (SVM).
4. We compare our top performing graphlet-based approach(es) and our DL approach to two sequence-based approaches (AAComposition and SVMfold) and a non-graphlet network-based approach (Existing-all). We do so fairly, under the same classifiers, as discussed above. We compare to AAComposition as a simple baseline sequence approach 5. We compare to SVMfold as a state-of-the-art sequence PSC approach 6, which integrates three sets of sequence features and uses SVM as its classification algorithm 6. We compare against Existing-all as both a baseline and state-of-the-art non-graphlet network approach 5. Note that Existing-all was already used in the task of unsupervised protein structural comparison but not in our task of supervised PSC. Also, we found two other recent PSC approaches, EnFTM-SVM 19 and PFPA 18, both sequence-based. While we wanted to include these methods in our study, we could not access their software, as our emails to the authors remained unanswered. Yet, because PFPA uses two sets of sequence features that are both included in SVMfold, PFPA's performance is expected to be at most as high as that of SVMfold. So, a superior performance of our approaches over SVMfold would likely indicate their superior performance over PFPA as well. Further, note that, per our above discussion, we could not find recent 3D-structural approaches that solve the PSC problem as defined in our study.
5. We evaluate the considered approaches on a large set of 9,509 CATH and 11,451 SCOP protein domains. We transform the protein domains into PSNs with labels corresponding to CATH and SCOP structural classes, where we study each of the four levels of the CATH and SCOP hierarchies 5. Our evaluation is based on measuring how correctly the trained classification models can predict the classes of unlabeled proteins in the test data using 10-fold cross-validation.
Our key findings are as follows: 1. Our graphlet features are superior to the baseline AAComposition sequence or Existing-all network features in terms of accuracy while being relatively comparable in terms of running time. While the state-of-the-art SVMfold sequence approach performs well (though still worse than the best of our proposed approaches), SVMfold is orders of magnitude slower than our approaches. In fact, SVMfold is so slow that we were able to run it only on 5.7% of our data.
2. Using PCA on the features often improves performance.
3. Feature integration using the EL framework considerably improves accuracy compared to individual features, though at higher running time.
4. Accounting for edge weights in PSNs via DL achieves accuracy that is relatively comparable to the performance of the individual unweighted network-based graphlet methods, though at higher running time (due to using DL). Note that here we are comparing as simple as possible weighted network information (the weighted adjacency matrix) against highly sophisticated unweighted network information (graphlet features, which are the state of the art in network science). So, a comparable accuracy of the former and the latter is promising.

Data and protein structure network (PSN) construction
We use a set of 17,036 proteins that was previously used in a large-scale unsupervised protein structural comparison study 5. To identify protein domains, we use two protein domain categorization databases: CATH and SCOP.
To construct a PSN from a protein domain, we use Protein Data Bank (PDB) files, which contain information about the 3D coordinates of the heavy atoms (i.e., carbon, nitrogen, oxygen, and sulphur) of the amino acids in the domain. In a PSN, nodes are amino acids of a protein domain, and there is an edge between two nodes if they are sufficiently close in 3D space. Clearly, given a protein domain, its PSN construction depends on 1) the choice of atom(s) of an amino acid to represent it as a node in the PSN and 2) a distance threshold between a pair of nodes to capture their spatial proximity. It was recently shown, by considering four different combinations of atom choice and distance threshold definitions (any heavy atom with 4 Å, 5 Å, and 6 Å distance thresholds, and α-carbon with a 7.5 Å distance threshold), that the choice of atom and distance threshold does not significantly affect the overall protein structural comparison performance 5. Hence, we consider only one of these PSN construction strategies in our study. Namely, we define an edge between two amino acids if the spatial distance between any of their heavy atoms is within 4 Å. Following this and other established guidelines 5, we obtain 9,509 and 11,451 PSNs corresponding to CATH and SCOP, respectively. We use these two data sets in our study.
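The edge rule just described (link two residues if any pair of their heavy atoms lies within 4 Å) can be sketched as follows. This is a minimal illustration with hypothetical coordinates, not the authors' code; each residue is represented by a list of its heavy-atom 3D coordinates.

```python
import math

def build_psn(residues, threshold=4.0):
    """Return the PSN edge set: link residues i and j (i < j) if ANY pair
    of their heavy atoms lies within `threshold` angstroms."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    edges = set()
    names = list(residues)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            close = any(dist(a, b) <= threshold
                        for a in residues[names[i]]
                        for b in residues[names[j]])
            if close:
                edges.add((names[i], names[j]))
    return edges

# Toy domain: ALA1 and GLY2 are within 4 A of each other; SER3 is far away.
residues = {
    "ALA1": [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0)],
    "GLY2": [(3.5, 0.0, 0.0)],
    "SER3": [(20.0, 0.0, 0.0)],
}
print(build_psn(residues))  # {('ALA1', 'GLY2')}
```

Note that the minimum is taken over all heavy-atom pairs of the two residues, so large side chains can create edges even when the backbones are farther apart.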
Given the CATH PSN data, we first test the power of the considered PSC approaches to predict the top hierarchical level classes of CATH: alpha (α), beta (β), alpha/beta (α/β), and few secondary structures. None of the CATH PSNs belongs to the few secondary structures class, so we do not consider this class further. Hence, we take all 9,509 CATH PSNs and identify them as a single PSN set, where the PSNs have labels corresponding to the three top-level CATH classes: α, β, and α/β.
Second, we compare the approaches on their ability to predict the second-level classes of CATH, i.e., within each of the top-level classes, we classify PSNs into their sub-classes. To ensure enough training data, we focus only on those top-level classes that have at least two sub-classes with at least 30 PSNs each. Three classes satisfy this criterion. For each such class, we take all the PSNs belonging to that class and form a PSN set, which results in three PSN sets.
Third, we compare the approaches on their ability to predict the third-level classes of CATH, i.e., within each of the second-level classes, we classify PSNs into their sub-classes. Again, we focus only on those second-level classes that have at least two sub-classes with at least 30 PSNs each. Nine classes satisfy this criterion. For each such class, we take all the PSNs belonging to that class and form a PSN set, which results in nine PSN sets.
Fourth, we compare the approaches on their ability to predict the fourth-level classes of CATH, i.e., within each of the third-level classes, we classify PSNs into their sub-classes. We again focus only on those third-level classes that have at least two sub-classes with at least 30 PSNs each. Six classes satisfy this criterion. For each such class, we take all the PSNs belonging to that class and form a PSN set, which results in six PSN sets.
Thus, in total, we analyze 1 + 3 + 9 + 6 = 19 CATH PSN sets. For further details on the number of PSNs and the number of different protein structural classes in each of the PSN sets, see Supplementary Tables S1-S3.
We follow the same procedure for the SCOP PSN data and obtain 1 + 5 + 6 + 4 = 16 SCOP PSN sets. For more details, see Supplementary Section S1 and Supplementary Tables S1-S3.

Protein features
For each of the protein domains, we extract the following types of protein features, which are based on either sequence, network (non-graphlet or graphlet), integration of sequence and network (graphlet), or weighted network. Sequence-based feature. We use a popular baseline sequence feature, AAComposition. Given a protein sequence, AAComposition measures the relative frequency of the 20 types of amino acids: for each amino acid type i, it measures the frequency of occurrence of i in the sequence divided by the total number of amino acids in the sequence. Non-graphlet network-based feature. Here, we use a feature that was shown to outperform many other non-graphlet network-based features in an unsupervised protein comparison task 5. We denote this feature as Existing-all. Given a PSN, Existing-all calculates and integrates seven different network features: average degree, average distance, maximum distance, average closeness centrality, average clustering coefficient, intra-hub connectivity, and assortativity. Graphlet network-based features. We use nine such features.
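The AAComposition feature described above reduces to counting one-letter amino acid codes. A minimal sketch (the 20 standard one-letter codes are assumed; the toy sequence is hypothetical):

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard one-letter codes

def aa_composition(sequence):
    """Relative frequency of each of the 20 amino acid types in the sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

feat = aa_composition("AAGV")
# The 20-dimensional vector sums to 1; alanine ('A') occurs 2/4 = 0.5 of the time.
```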
Normalized graphlet counts. Since PSNs can be of very different sizes, we use two recent protein features that are based on normalized graphlet counts and that thus account for network size differences 5. These features are NormGraphlet-3-4 and NormGraphlet-3-5; they are normalized equivalents of Graphlet-3-4 and Graphlet-3-5, respectively. In particular, given a PSN, in both the NormGraphlet-3-4 and NormGraphlet-3-5 feature vectors, position i represents the total count of graphlets of type i divided by the sum of the counts of all graphlet types.
Ordered graphlet counts. Graphlets capture 3D structural but not sequence information. To integrate the two, ordered graphlets were proposed 17. These are graphlets whose nodes acquire a relative ordering based on the positions of the amino acids in the sequence. Two ordered graphlet features exist: OrderedGraphlet-3 and OrderedGraphlet-3-4 5,17. There are four 3-node and 42 3-4-node ordered graphlet types 5. For a PSN, in the OrderedGraphlet-3 and OrderedGraphlet-3-4 feature vectors, position i is the total count of ordered graphlets of type i.
In addition, we use two features that are based on normalized counts of ordered graphlets 5: NormOrderedGraphlet-3 and NormOrderedGraphlet-3-4; these are normalized equivalents of OrderedGraphlet-3 and OrderedGraphlet-3-4, respectively. For a PSN, in the NormOrderedGraphlet-3 and NormOrderedGraphlet-3-4 feature vectors, position i is the total count of ordered graphlets of type i divided by the total count of all ordered graphlet types.
Although ordered graphlets capture the relative positions of amino acids, they fail to capture how far apart the amino acids are in the protein sequence. Amino acids that are spatially close but far apart in the sequence can be more informative than amino acids that are spatially close simply because they are close in the sequence. So, a feature called NormOrderedGraphlet-3-4(K) was proposed 5. Unlike NormOrderedGraphlet-3-4, NormOrderedGraphlet-3-4(K) counts an ordered graphlet only if every pair of amino acids that are linked by an edge are at least K positions apart in the sequence 5.
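The normalization shared by all of the Norm* features above is simply a division of each graphlet count by the total count, which makes the feature comparable across PSNs of different sizes. A minimal sketch (the toy count vector is hypothetical):

```python
def normalize_counts(counts):
    """Graphlet count vector -> relative frequencies, so that networks of
    different sizes become comparable; position i is count_i / sum of counts."""
    total = sum(counts)
    return [c / total for c in counts] if total else counts

# E.g., raw counts over graphlet types:
print(normalize_counts([2, 3, 5]))  # [0.2, 0.3, 0.5]
```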
We integrate all 11 features into a new Combined feature. Principal component analysis (PCA)-transformed features. Recently, PCA transformation of protein features, in order to better capture their (dis)similarity, was proposed 5. Here, we perform the same PCA transformation. For a given PSN set, for each of the above-mentioned 11 protein features, we apply PCA to obtain new PCA-transformed features. We pick the first r principal components, where r is the smallest value (but at least two) such that the selected components account for 90% of the variation in the data set. Also, we combine the 11 post-PCA protein features into a new post-PCA Combined feature. Weighted network-based feature. We use a weighted adjacency matrix, or distance matrix 23, of a 3D protein structure as a weighted PSN-based feature representation. In particular, given a protein of length n, we define a weighted adjacency matrix D of size n × n, in which each entry D_ij contains the minimum 3D spatial distance between amino acids i and j, where the minimum is taken over all pairwise distances between any heavy atoms of i and j.
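The rule for choosing the number of principal components r described above can be sketched as follows, given per-component explained-variance ratios in decreasing order (the example ratios are hypothetical):

```python
def pick_num_components(explained_variance_ratio, target=0.90, r_min=2):
    """Smallest r (but at least r_min) whose leading principal components
    together account for `target` of the variance in the data set."""
    cumulative, r = 0.0, 0
    for ratio in explained_variance_ratio:
        cumulative += ratio
        r += 1
        if cumulative >= target and r >= r_min:
            return r
    return len(explained_variance_ratio)

# Cumulative variance: 0.50, 0.80, 0.95 -> three components reach 90%.
print(pick_num_components([0.50, 0.30, 0.15, 0.05]))  # 3
```

The `r_min` floor reflects the paper's requirement that at least two components are always kept, even when a single component would already explain 90% of the variance.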

The logistic regression (LR) framework
For each of the 35 PSN sets, we train an LR classifier corresponding to each of the 22 different protein features. Hence, for each of the PSN sets, we get 22 different trained LR classifiers. In each of the classifiers, the input is a feature representation of a protein and the output is the structural class to which the protein belongs. Given the n data points in a data set with input feature vectors of size k, i.e., f = [f_1, ..., f_k], we have two vectors β = [β_0, ..., β_k]^T and x = [1, f_1, ..., f_k] of size k + 1, and the LR classifier takes the form:

P(c | x) = σ(t) = 1 / (1 + e^(−t)), (1)

where t = ⟨β, x⟩ is the inner product of the vectors β and x. The training procedure determines the entries of the vector β by minimizing the cost function ξ(β, x, c), where c is the correct class label for the feature vector f of a given data point. Equation (1) with optimal parameters, for a given class, produces a value in [0, 1] that represents the probability of the data point belonging to that class. The L2-Ridge regularized loss function to be minimized with respect to β is:

ξ(β) = (1/2) ⟨β, β⟩ + C Σ_{i=1}^{n} log(1 + e^(−c_i ⟨β, x_i⟩)), (2)

where c_i ∈ {−1, +1} is the label of data point i and C > 0 is the regularization trade-off parameter. An unconstrained optimization algorithm minimizes the loss function in (2) and finds the optimal parameters. The performance of the LR classifier is reported based on a 10-fold cross-validation analysis of the trained classifier.
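The logistic model and its ridge-regularized loss can be sketched directly from the definitions above. This is an illustrative pure-Python computation (the toy weights and data are hypothetical), not the LIBLINEAR solver the paper actually uses:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def lr_probability(beta, features):
    """P(class | features) for the logistic model; x = [1, f_1, ..., f_k]."""
    x = [1.0] + list(features)
    t = sum(b * xi for b, xi in zip(beta, x))  # inner product <beta, x>
    return sigmoid(t)

def ridge_logistic_loss(beta, data, C=1.0):
    """L2-Ridge regularized logistic loss over labeled points (f, c), c in {-1, +1}."""
    reg = 0.5 * sum(b * b for b in beta)
    fit = 0.0
    for features, c in data:
        x = [1.0] + list(features)
        t = sum(b * xi for b, xi in zip(beta, x))
        fit += math.log(1.0 + math.exp(-c * t))
    return reg + C * fit

print(lr_probability([0.0, 0.0], [1.0]))  # 0.5 (zero weights -> no preference)
```

A training procedure would then minimize `ridge_logistic_loss` over `beta` with an unconstrained optimizer, as the text describes.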

The ensemble learning (EL) framework
Different protein features provide complementary information about the 3D structure of proteins; therefore, one expects the classifiers trained on individual feature vectors to be diverse, and the combination of their decisions to yield high performance for supervised classification of 3D protein structures. Consequently, we propose a hierarchical learning architecture for the supervised classification of 3D protein structures that combines the outputs of different LR classifiers using an EL framework. In particular, we train two such classifiers, corresponding to all 11 LR-based classifiers that are based on pre-PCA protein features (Combined) and all 11 LR-based classifiers that are based on post-PCA protein features (post-PCA version of Combined), respectively. Hence, for the Combined framework, the input is all 11 pre-PCA protein features, and for the post-PCA Combined framework, the input is all 11 post-PCA protein features. For each of the frameworks, given a protein, the output is the structural class the protein belongs to. Needless to say, our approach is "supervised," which means that we utilize the data points' labels in the training phase to predict the class labels for the test data points in the testing phase.
Our framework provides a fast and efficient decision-making process that ensembles the information attained from the LR classifiers, each trained on an individual feature vector. We show that our EL framework outperforms all other classifiers and provides a powerful supervised classification platform. In this vein, our EL architecture consists of two levels: 1. At the first level, a set of LR classifiers is trained, one LR classifier per feature vector. As described before, each feature vector, depending on its definition, represents a specific property of the protein network structure. The feature vectors of protein network structures are the inputs to our classifiers. We perform 10-fold cross-validation in order to report the performance of our EL classifier. To this end, we divide the input data points into a primary testing fold and a primary training fold. We perform transfer learning in which the feature vectors in the primary training fold are partitioned into two groups of secondary training and testing data points; the LR classifiers are trained on the secondary training data set and tested on the secondary testing partition. To be more specific, the secondary training data set is partitioned into k folds, where k − 1 of them are used for training by leave-one-out cross-validation and the kth fold is used to test the classifier. This means that the decision hypothesis of each classifier is validated using the samples belonging to the kth chunk of the data. This process leads to k trained classifiers, where each classifier outputs estimates of the posterior probabilities of the data points belonging to the set not incorporated in the training procedure. These posterior probabilities indicate the probability of an input data point belonging to a particular class. By the end of this procedure, each data point in the secondary training data set will have a set of posterior probabilities assigned to it. The posterior probabilities are placed in a
membership vector of size 1 × C, where C is the number of classes. Thereby, each LR classifier maps the variable-dimensional feature vectors explained in the previous section to a C-dimensional decision space. This mapping trains the LR classifiers to become experts at capturing diverse qualities of different protein network structures. Given the n data points in the secondary training data set, the resulting n vectors of size 1 × C provide the input data points for a specific feature for the second-level classifier, i.e., for feature vector i we have y_i, an n × C matrix whose rows are the 1 × C membership vectors of the n data points. The final input to the second-level classifier is the ensemble of all the posterior probabilities given all the feature vectors considered, i.e., Y = [y_1, ..., y_f], which is an n × (f × C) matrix, with f representing the number of features considered.
2. As mentioned, the class posterior probabilities obtained at the output of the LR classifiers are concatenated under a new decision space to form the inputs to the secondary classifier, which is trained for the final decision. At the second level, the membership vector of size 1 × Cf for each input data point is produced by concatenating the membership vectors obtained from the f base classifiers. The secondary classifier is a support vector machine (SVM) classifier with a linear kernel. We train this classifier using the concatenated membership vectors in Y. To test the final classifier, we first compute the class membership vectors of the primary test data points using the decision boundaries of the trained classifiers in the first level and feed the results into the second-level SVM classifier. In our experiments, f is 11 for both the pre-PCA-only and post-PCA-only analyses. Needless to say, the fully trained EL classifier can be obtained by training on all the initial data points, without dividing the data into primary training and testing folds.
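The concatenation step that builds the second-level input can be sketched as follows (a minimal illustration with hypothetical posterior values; the real first-level posteriors come from the trained LR classifiers):

```python
def stack_posteriors(per_feature_posteriors):
    """Concatenate the f first-level membership vectors (each of length C)
    into one row of the n x (f x C) second-level input matrix Y."""
    stacked = []
    for posterior in per_feature_posteriors:
        stacked.extend(posterior)
    return stacked

# f = 2 features, C = 3 classes, one protein:
row = stack_posteriors([[0.7, 0.2, 0.1], [0.5, 0.4, 0.1]])
print(len(row))  # 6 (= f * C)
```

Stacking all n such rows yields the matrix Y on which the second-level linear SVM is trained.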
SVM is a supervised learning classifier that seeks to construct a hyper-plane or a set of hyper-planes to separate different classes in higher-dimensional spaces by maximizing the margin, i.e., the distance between the nearest training data points of any class; in general, the larger the margin, the lower the generalization error of the classifier. At the second level, we have n data points Y_i of size 1 × (f × C). The hyper-plane separating classes can be written as the set of points satisfying ⟨w, y⟩ + b = 0, where w is the normal vector to the hyper-plane. The separating hyper-plane has dimension f × C − 1. As an example, for a two-class problem with c ∈ {−1, +1} and a data point Y_i, the decision rule

c(Y_i) = sign(⟨w, Y_i⟩ + b) (3)

determines the class membership and the final decision of our EL framework.
The L2-Ridge regularized loss function to be minimized with respect to w, in order to obtain the soft margins that maximize the distance between the boundaries of classes, is:

ζ(w) = (1/2) ⟨w, w⟩ + C Σ_{i=1}^{n} max(0, 1 − c_i (⟨w, Y_i⟩ + b))^2. (4)

In a similar manner as before, an unconstrained optimization algorithm minimizes the loss function given in (4) to find the optimal parameters. To solve (2) and (4), the optimization problem is solved using a trust region Newton method 24, and for multi-class data sets, we implement the multi-class strategy of Crammer and Singer 25. Further details on the LR and SVM classification methods may be found in 26. As before, the accuracy of the EL architecture is reported based on the 10-fold cross-validation performance analysis of the trained classifiers. We use the LIBLINEAR package in our software implementation of the LR and SVM classifiers 27. Our results reflect that different graphlet and feature vector structures provide supplementary and complementary information for the secondary classifier, leading to an efficient EL framework with high supervised classification accuracy.

The deep learning (DL) framework
In the second part of our study, we design a DL framework that can detect the dominant features and patterns in the 3D structure of proteins directly from their weighted protein networks (distance matrix representations) and consequently accurately classify the unlabeled data points. For each of the 35 PSN data sets, we train a deep neural network classifier. In this framework, the input is a distance matrix representation of a protein and the output is the class of the protein. This supervised framework can predict the class of unlabeled proteins with high precision in any of the data sets defined in Section 2. The deep artificial neural network (ANN) is designed in Python using Google's TensorFlow package 28. Our DL framework accepts as its input the distance matrix representations U in a vector format (the input matrices representing the 3D protein structures are flattened). The distance matrix representations, however, have different sizes, and our DL architecture only accepts vectors of the same size as input. To overcome this, we utilize the zero-padding approach for dealing with vectors of different sizes, commonly used in the computer vision research literature 26: we equalize the lengths of all input vectors resulting from the flattened distance matrix representations by padding them with zeros. The DL framework consists of an input layer with the size of the padded distance matrices in vector format, seven hidden layers of sizes [1000, 600, 320, 170, 85, 40, 12], and an output layer of size equal to the number of classes in the specific data set under investigation. The general model of the DL is a function ĉ = f(U; θ) with the DL parameter set θ = [W, B], where W is the collection of weights {W_i}_{i=1:7}, B is the collection of biases {B_i}_{i=1:7} at each neuron, and S represents the activation function arctan. The output of each neuron y_i may be represented as

y_i = S(Σ_{j=1}^{k} w_j u_j + b_i),

where k and the u_j's indicate the total number of neurons and the neurons' outputs in the previous
layer. Depending on the number of classes (i.e., the size of the output layer) m, the last layer produces a collection of values in [0, 1], each estimating the probability of the input data point belonging to a particular class. This is fulfilled by utilizing the Softmax transformation in the last layer, as follows:

σ(z)_j = e^(z_j) / Σ_{l=1}^{m} e^(z_l), for j = 1, ..., m,

where the z_j's represent the signals sent to each neuron in the output layer. Hence, we have P(c_j | U) = P(c_j | z) = σ(z)_j. The final decision is made by assigning the input data point to the class with the highest probability. Consequently, the DL is trained to minimize the objective function J in the presence of an L2-Ridge regularization with parameters λ_1, λ_2 = 0.01, which add stability and robustness to the learning process:

J(θ) = L(θ) + λ_1 ‖W‖_2^2 + λ_2 ‖B‖_2^2, (5)

which is minimized with respect to θ. The training process is started with the "Xavier" weight initialization rule given in 29. The loss function L given in (5) takes the form of cross-entropy:

L(θ) = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} c_ij log(ĉ_ij),

where n is the number of training data points, and c_ij and ĉ_ij are the true and the predicted class labels for a particular input: c_ij is equal to 1 only if data point i belongs to class j, and ĉ_ij is the output probability that data point i belongs to class j. The optimization method used to minimize the objective function implements the Adam algorithm, which supersedes the classical stochastic gradient descent procedure as it is both more computationally efficient and more robust to noise 30. For performance analysis of our deep learning architecture, for a given data set, the data points are randomly divided into two partitions, with 80% of the data placed in the training set and 20% of the data points placed in the test set. The performance of the deep learning architecture is reported based on the error rate on the test data set and represents the accuracy of the trained deep learning framework in correctly classifying the unlabeled test data points.
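Two mechanical pieces of the DL framework above, the zero-padding of flattened distance matrices to a common input length and the Softmax output with its cross-entropy loss, can be sketched as follows. This is an illustrative pure-Python sketch with hypothetical values, not the TensorFlow implementation:

```python
import math

def zero_pad(flat_matrix, length):
    """Pad a flattened distance matrix with zeros to the common input length."""
    return flat_matrix + [0.0] * (length - len(flat_matrix))

def softmax(z):
    """Softmax over the output-layer signals z; returns probabilities summing to 1."""
    shifted = [v - max(z) for v in z]  # shift by max(z) for numerical stability
    exps = [math.exp(v) for v in shifted]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(true_onehot, predicted):
    """-sum_j c_j * log(c_hat_j) for one data point with one-hot true labels."""
    return -sum(c * math.log(p) for c, p in zip(true_onehot, predicted) if c)

probs = softmax([2.0, 1.0, 0.1])   # sums to 1; the first class is most probable
loss = cross_entropy([1, 0, 0], probs)
```

The final class prediction is simply the index of the largest entry of `probs`, matching the decision rule in the text.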

Results and discussion
Throughout this section, unless stated otherwise, we analyze all 35 considered PSN sets, which span all four levels (groups) of the CATH and SCOP hierarchies (Section 2.1). For each considered method, we report its accuracy as well as its running time. In Section 3.1, we compare the different graphlet features under the LR classifier (Section 2.2.2) to identify the best one(s) for further analyses. In Section 3.2, we examine whether the PCA transformation of the best graphlet feature(s), as well as of the existing baseline AAComposition sequence and Existing-all non-graphlet network features (Section 2.2.1), can improve classification accuracy compared to the corresponding pre-PCA features, all under the LR classifier. Here, we leave out from consideration the existing SVMfold sequence approach 6, because we were unable to apply this approach to all 35 PSN sets due to its extremely high time complexity. Instead, we consider SVMfold later on, in a smaller-scope analysis of two of the 35 PSN sets (see below). In Section 3.3, we evaluate whether integration of the considered graphlet features, AAComposition, and Existing-all via the EL framework (Section 2.2.3) improves accuracy compared to using the individual features under the LR classifier. In Section 3.4, we compare the performance of the sophisticated graphlet-based PSC approaches that deal with unweighted PSNs (Section 2.2.1) to the simple weighted PSN-based feature classification via deep learning (Section 2.2.4). In Section 3.5, we analyze two representative PSN sets on which SVMfold could be run, to compare our proposed approaches to this state-of-the-art existing PSC approach.

Comparison of graphlet features under LR classifier
When we compare all graphlet features under the LR classifier, OrderedGraphlet-3-4 is the most accurate of all pre-PCA graphlet features, while NormOrderedGraphlet-3-4 and NormOrderedGraphlet-3-4(K) are the most accurate of all post-PCA graphlet features (Fig. 1). So, for further analyses, we keep these three best-performing features. NormOrderedGraphlet-3-4(K), i.e., adding the long-range K constraint, improves the accuracy of NormOrderedGraphlet-3-4. OrderedGraphlet-3-4, i.e., adding node order, improves upon its regular (non-ordered) counterpart. These results are in alignment with our past work on unsupervised protein comparison 5, even though in our current study the improvement of NormOrderedGraphlet-3-4(K) over NormOrderedGraphlet-3-4 is only marginal. Unlike in our past unsupervised study, in our current study graphlet feature normalization does not improve upon non-normalized features, and sometimes it actually worsens accuracy.
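For intuition, graphlet features summarize a network by counting its small induced subgraphs. The toy sketch below counts the two connected 3-node graphlets (the path and the triangle) in an unweighted PSN; the paper's ordered and normalized variants additionally encode sequence order and normalization, which this simplification omits.

```python
import itertools
import numpy as np

def count_3node_graphlets(A):
    """Count connected 3-node graphlets in an unweighted PSN.
    A is a symmetric 0/1 adjacency matrix with a zero diagonal.
    A node triple with 2 edges forms a path; with 3 edges, a triangle."""
    A = np.asarray(A)
    n = A.shape[0]
    counts = {"path": 0, "triangle": 0}
    for i, j, k in itertools.combinations(range(n), 3):
        edges = A[i, j] + A[i, k] + A[j, k]
        if edges == 2:
            counts["path"] += 1
        elif edges == 3:
            counts["triangle"] += 1
    return counts
```

For example, a 4-node cycle contains four induced paths and no triangles, while a fully connected 3-node network contains exactly one triangle.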

PCA feature transformation improves PSC accuracy
Here, we consider the following features under the LR classifier: the three top-performing graphlet features from Section 3.1, Existing-all, AAComposition, and the integrated Combined feature (Section 2.2.1). When we compare pre- and post-PCA versions of each feature, we find that post-PCA versions are generally more accurate than pre-PCA versions (Fig. 2). Only for Combined is the accuracy almost tied, with marginal superiority of its post-PCA version, and only for OrderedGraphlet-3-4 is the pre-PCA version superior. The overall benefit of using PCA may be attributed to the post-PCA features having a more compact representation, which alleviates the negative effects of the "curse of dimensionality" on classification algorithms.
That is, PCA helps in most but not all cases. So, henceforth, for each feature, we use the best of its pre- and post-PCA versions.
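The PCA transformation compresses a feature matrix into fewer dimensions while preserving most of its variance. Below is a minimal SVD-based sketch of the transformation; the paper's exact PCA settings (e.g., how many components are retained) are not restated here and the sketch is only illustrative.

```python
import numpy as np

def pca_transform(X, n_components):
    """Project the rows of feature matrix X (one row per protein) onto
    the top principal components of the mean-centered data, yielding a
    more compact feature representation."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

For perfectly collinear features, all variance lands on the first component and the remaining components are numerically zero, which illustrates why the post-PCA representation can be much more compact than the original.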

Feature integration via EL improves PSC accuracy
Feature integration under the EL framework (Combined) statistically significantly enhances classification accuracy compared to training any feature individually under the LR classifier (Figs. 3-4 and Tables 1-2). Hence, it is likely that Combined efficiently utilizes the complementary information from the different individual features.
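One common way to realize such feature integration is a soft-voting ensemble that averages the class-probability outputs of classifiers trained on the different individual features. The sketch below is a generic illustration of that idea; the exact EL framework of Section 2.2.3 may combine the base classifiers differently.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft-voting ensemble: average the class-probability matrices
    produced by per-feature classifiers (one matrix per feature type,
    rows = proteins, columns = classes), then assign each protein to
    the class with the highest mean probability."""
    mean_probs = np.mean(prob_list, axis=0)
    return mean_probs.argmax(axis=1)
```

If one feature's classifier is confident and correct where another is uncertain, the averaged probabilities let the confident classifier dominate, which is one way complementary features can raise accuracy.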
However, Combined has a larger running time than the individual feature approaches (Fig. 3). Combined's much larger running time compared to Existing-all, AAComposition, and NormOrderedGraphlet-3-4 might be justified by its much higher accuracy compared to these three features. Moreover, Combined's only slightly larger running time compared to NormOrderedGraphlet-3-4(K) might be justified by its much higher accuracy compared to the latter. However, Combined's much larger running time compared to OrderedGraphlet-3-4 might not be justified, because Combined has only somewhat higher accuracy than the latter, especially for the lower-level CATH/SCOP classes (Fig. 3 and Tables 1-2).

Table 1. Accuracy of the approaches from Fig. 3 for each CATH group.

Weighted network-based DL classification performs well compared to unweighted graphlet classification
Our proposed DL classifier performs quite well in terms of accuracy (Fig. 3 and Tables 1-2). Specifically, it is significantly superior to AAComposition and Existing-all, and it is comparable to two of the three top-performing graphlet features; only OrderedGraphlet-3-4 and Combined are significantly better than DL (Fig. 4). Yet, compared to Combined, DL is sometimes much faster (Fig. 3(a)). These results mean that the DL framework can automatically detect and learn meaningful weighted network features. Importantly, unlike the other individual (LR) or integrative (EL) classifiers that make use of highly sophisticated unweighted network information such as graphlet features, the DL framework utilizes only the simplest possible weighted network information (i.e., the weighted adjacency matrix of a network) as its input. This points to a promise of future algorithmic developments for dealing with weighted networks, perhaps even designing weighted graphlet features.
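For concreteness, the weighted network information that DL consumes can be sketched as the pairwise residue distance matrix, which can in turn be thresholded into the unweighted PSN used by the graphlet approaches. The 6 Å cutoff below is a common choice in the PSN literature and is an assumption here, not necessarily the paper's exact construction (see Section 2).

```python
import numpy as np

def weighted_psn(coords):
    """Pairwise Euclidean distance matrix between residue coordinates
    (e.g., C-alpha atoms). This dense weighted matrix is the kind of
    representation fed directly to the DL framework."""
    c = np.asarray(coords, dtype=float)
    diff = c[:, None, :] - c[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def unweighted_psn(D, cutoff=6.0):
    """Thresholded, unweighted version: connect residues within `cutoff`
    angstroms (assumed value; the paper's exact cutoff may differ)."""
    return ((D > 0) & (D <= cutoff)).astype(int)
```

The contrast between the two functions mirrors the contrast in the text: graphlet features see only the 0/1 matrix, while DL sees the full distance-weighted matrix.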

Our approaches improve upon state-of-the-art SVMfold
We can compare to the state-of-the-art SVMfold approach only for two representative PSN sets out of all 35, because of SVMfold's extremely high running time (Table 3). Specifically, we choose CATH-3.20.20 and CATH-3.40.50 from group 4 of the CATH data as the representative PSN sets, for the following reason. These two PSN sets correspond to the fourth level of the CATH hierarchy, i.e., to structural classes that are as specific as possible, which are the most relevant for applied biochemistry scientists. Also, of all fourth-level PSN sets, these two are the ones on which our proposed approaches perform the best (CATH-3.40.50), which gives our approaches the best-case advantage over SVMfold, and the worst (CATH-3.20.20), which gives SVMfold the best-case advantage over our approaches.
SVMfold has a high running time because it needs to extract three sets of very comprehensive features from protein sequence information. This complex information retrieval process needs to be performed for each protein in the considered PSN set, which becomes infeasible when analyzing large PSN sets containing many proteins (such as those at the higher levels of the CATH/SCOP hierarchies) or many PSN sets. Furthermore, we note that one of SVMfold's three feature sets actually includes structural class information. That is, SVMfold does not just use class information as labels for training purposes; it also uses this information in the features. We argue that because of this, SVMfold rests on an invalid circular argument, which artificially inflates its accuracy (the label that is to be predicted is already included in the feature based on which the prediction is to be made). This also implies that SVMfold cannot be applied to the task of classifying proteins with unknown labels, which is the ultimate goal of PSC.
Despite this bias and unfair advantage of SVMfold, our best approach, Combined, outperforms SVMfold. Also, our individual graphlet features under the LR classifier and our DL approach are comparable to SVMfold in terms of accuracy (DL on CATH-3.20.20, or all graphlet approaches on CATH-3.40.50; Table 4), at a fraction of SVMfold's running time (Table 3). In a comprehensive evaluation, we demonstrate the power of unweighted graphlet-based PSC, weighted network-based deep learning PSC, and data-integrative PSC. Specifically, the LR classifier trained on the OrderedGraphlet-3-4 feature provides a strong (accurate yet fast) platform for protein classification. Our integrative EL classifier outperforms all of the other considered classifiers in terms of accuracy, including the state-of-the-art classifier SVMfold, albeit at a higher computational cost compared to most of the other approaches (except SVMfold, which is by far the slowest of all approaches). However, the much lower running time of the individual LR classifier trained on OrderedGraphlet-3-4 in comparison to the EL classifier suggests that when running time efficiency is required, one should utilize this graphlet-based approach, which still achieves high accuracy. Further, our proposed DL framework, by automatically learning appropriate features from the initial weighted adjacency matrices, yields comparable accuracy. This points to a promising future for algorithms that rely on weighted network-based attributes of protein 3D structures.

Figure 1. Accuracy of the 18 pre- and post-PCA graphlet features under the LR classifier, for each of the four hierarchy levels (groups) of CATH, averaged over all PSN sets belonging to the given group (vertical lines are standard deviations). Results are qualitatively similar for the SCOP groups as well (Supplementary Fig. S1).

Figure 2. Accuracy of pre- and post-PCA versions of the three top-performing graphlet features, the Existing-all non-graphlet PSN feature, and the AAComposition sequence feature under the LR classifier, plus the integrated Combined feature under the EL classification framework, for PSN groups 3 and 4 of CATH. Results for the other groups of CATH and all groups of SCOP are qualitatively similar (Supplementary Figs. S2 and S3). Results are averaged over all PSN sets in the given group (horizontal and vertical lines are standard deviations).

Figure 3. Accuracy versus running time of the approaches from Fig. 2 plus deep learning (DL), for group 4 of CATH and SCOP. Results are qualitatively similar for all other groups of CATH and SCOP (Supplementary Fig. S4). For each method except DL, the best of its pre- and post-PCA versions is chosen (DL does not have this option). If the latter (post-PCA) is selected, "*" is shown next to the given method's name.

Figure 4. Statistical significance of the accuracy difference of the approaches from Fig. 3, calculated by comparing the approaches' accuracy scores over all 35 PSN sets using the paired t-test.

Table 2. Accuracy of the approaches from Fig. 3 for each SCOP group.

Table 3. Running times (in minutes) of the approaches from Fig. 3 plus SVMfold, for the CATH-3.20.20 and CATH-3.40.50 PSN sets. Due to SVMfold's large running time, we could not evaluate it on additional PSN sets.

Table 4. Accuracy of the approaches from Fig. 3 plus SVMfold, for the CATH-3.20.20 and CATH-3.40.50 PSN sets. Due to SVMfold's large running time (Table 3), we could not evaluate it on additional PSN sets.