As 3D modelling and scanning tools become more widespread, and as 3D objects gain ground in many application domains [1] [2], researchers’ attention has been drawn to processing techniques for these objects. The main motivation behind such research is to reduce the cost of modelling and processing, which can be achieved by devising indexing and retrieval approaches that enable fast and reliable search engines. The main challenge in 3D object indexing and retrieval is to build a robust descriptor by extracting the geometric and topological features of 3D objects, producing a signature that can distinguish them. 3D object indexing and retrieval paradigms can be broadly divided into global-based approaches and partial-based approaches.
Global-based approaches are characterized by the global visual appearance of the 3D object; they describe the whole object with a single vector. However, they cannot correctly match 3D objects when the available data are incomplete, imperfect, or corrupted. Partial-based approaches solve the problem of matching incomplete or imperfect 3D objects, in addition to supplying a higher-level description of a 3D model. They rest on a thorough analysis of the parts of 3D objects and are motivated by the idea that similar 3D objects consist of similar parts; matching then amounts to comparing the parts of 3D objects, which is ordinarily realized by minimizing a distance measure.
The proposed partial 3D object indexing and retrieval approach, which can be applied to both complete and incomplete 3D objects, is based on the similarity computed between the 2D representative slices of each 3D object, transforming the problem of shape matching between 3D objects into assessing the similarities between their respective 2D slices. The proposed approach begins with a normalization stage to ensure that similar 3D objects are treated the same way and, consequently, that matching 3D objects generate similar 2D slices. Then, for each 3D object, we extract an initial set of 2D slices corresponding to determined axes. Afterwards, we characterize each 2D slice by a vector of Zernike moments. Next, we represent the 2D slices of each 3D object in a transactional database. We then use the Apriori algorithm and association rules to select the most representative slices from the initial set. Finally, we apply our proposed metric to compare the query’s representative 2D slices with those of the database. The present paper is organized as follows. In the second section, we discuss some 3D object retrieval approaches, classifying them into two categories, global and partial methods. The third section discusses the concept of data mining. In the fourth section, the proposed approach is introduced. Experimental results are provided in the fifth section. Finally, the sixth section presents the conclusion and our future work.
In the past decade, a great number of content-based techniques for 3D object indexing and retrieval have been proposed [3]. In this section, we discuss 3D object descriptors sorted into two groups: global-based approaches and partial-based approaches.
Generally speaking, global-based approaches describe the 3D object as a whole, without paying attention to its components, which can affect their results in some cases. Wang et al. [4] proposed a global representation of the 3D object using voxels. They introduced NormalNet, a voxel-based convolutional neural network (CNN) for 3D object representation and retrieval. The network takes the normal vectors of the object surfaces as input, showing stronger discriminative capacity than binary voxels. Bouksim et al. [5] introduced a new way to train an artificial neural network (ANN) with a histogram of features (shape index, dihedral angle, and shape diameter function) extracted directly from the 3D object. After the training stage, the authors concatenated the hidden layers and used them as a descriptor in the retrieval process. Luciano et al. [6] used geodesic moments to propose a new geometric approach for 3D object retrieval, adopting an unsupervised approach that learns shape descriptors with sparse autoencoders. Biasotti et al. [7] presented a search engine model for dataset exploration that relies on multiple similarity criteria between models. Their combination of similarity criteria supports user-driven navigation and similarity assessment, and they explored 3D object collections exhibiting different variations of the objects’ properties. Wu et al. [8] based their work on the scale-invariant heat kernel signature (SIHKS) to create a non-rigid 3D shape classification approach using convolutional neural networks (CNNs).
Researchers’ attention has been drawn to partial 3D object indexing and retrieval thanks to the development of 3D design tools and the wide availability of 3D scanners. This interest has been further intensified by the emergence of diverse application areas, for example digital libraries of cultural heritage models, which demand partial 3D model indexing and retrieval capabilities. In this context, the scanned query can be noisy and rough; moreover, it is not obvious how to match an incomplete 3D object against a complete one, since there is a gap between their representations. This gap complicates the extraction of a descriptor that would enable matching an incomplete 3D object with the objects of its class. Current partial 3D object retrieval approaches can be principally categorized as: i) part-based, ii) image-based, and iii) bag-of-visual-words-based approaches.
Part-based approaches rest on the theory that human beings examine the semantics of an object’s parts in order to recognize it. In fact, they rely on the assumption that similar 3D objects consist of similar segments. Arhid et al. [9] proposed a part-based approach that segments each 3D object into its constituent segments and then computes, for each part, a multi-criteria descriptor based on data envelopment analysis (DEA). To calculate the similarity between 3D objects, the authors applied a new technique for comparing the descriptors of the objects’ parts. Agathos et al. [10] introduced a novel retrieval approach for articulated objects. The proposed method is composed of a segmentation step, which produces the 3D object’s Attributed Relation Graph (ARG), after which the Earth Mover’s Distance (EMD) is used to measure the similarity between two ARGs. Tierny et al. [11] leveraged Reeb graph theory to improve both the 3D object description and comparison procedures. The authors represented each 3D object by a Reeb graph linked with geometrical signatures; the similarity between two 3D objects is then examined by computing a variant of their maximum common sub-graph.
2D-image-based approaches represent the 3D object by a set of 2D images, which are then used in the indexing and retrieval process. Chen et al. [12] proposed the Light Field descriptor, considered one of the earliest 2D-image-based approaches; the authors used Fourier coefficients and Zernike moments to describe a set of 2D images captured at the vertices of a dodecahedron. Papadakis et al. [13] presented the PANORAMA descriptor, which consists of the 2D Discrete Wavelet Transform and 2D Discrete Fourier Transform computed on a set of panoramic images of a 3D object, characterized by the orientation and position of the object’s surface in space. Liu et al. [14] designed a multi-view latent variable model (MVLVM) with an undirected graph structure in which the 3D object’s image set is treated as the observations from which to learn the latent visual and spatial contexts; they also detail the learning and inference process of MVLVM for 2D-image-based 3D object retrieval. Ouazzani Taybi et al. [15] started by extracting, for each 3D object, a set of 2D slices corresponding to its three main axes, and then used the K-means clustering method to select the representative ones, transforming the comparison between 3D objects into similarity computation between their 2D slices. This approach produces satisfactory results when the number of clusters is correctly chosen; otherwise, the clustering step yields over-partitioning or under-partitioning. To remedy this problem, the authors in [16] used a cluster validity index to adapt the number of clusters to the complexity of each 3D object.
The bag-of-visual-words approach has been applied effectively to 3D object indexing and retrieval. It has produced fruitful applications in both geometry-based and image-based approaches, and it offers clear advantages for partial-based approaches. A typical example of applying the bag-of-visual-words scheme to an image-based approach is the method of Furuya et al. [17], in which the bag-of-visual-words method encodes the Scale-Invariant Feature Transform (SIFT) features of a set of depth images of a 3D object into a histogram. The approach of Laga et al. [18] achieved outstanding performance among partial-based approaches by applying the bag-of-visual-words method to the Laplace-Beltrami spectrum features of a collection of evenly sampled points on the 3D object’s surface, obtained by projecting the geometry onto the Laplace-Beltrami operator’s eigenvectors.
Data mining, also called knowledge discovery in databases, is an important research domain in computer science. It is widely used in business (insurance, retail, banking, credit card fraud detection), scientific research (medicine, astronomy, biological data analysis), and government security (detection of criminals and terrorists). One of the most important data mining tasks is to find association rules and to discover interesting and useful patterns and relationships in large volumes of data [19].
At first, association rule theory was widely used for marketing purposes, but it can also be applied in other research domains, such as the search for frequent values, pairs, or co-occurrences whenever the data set lends itself to this analysis, as Hébrail et al. [20] posited. The classic method for solving the association rule problem is the Apriori algorithm proposed by Agrawal et al. [21]. The use of this algorithm in data mining allows the examination of the various feasible combinations of items to discover likely relationships, which are then formulated as association rules. Following the original definition, the problem of association rule mining is stated as follows.
Let $I = \{i_1, i_2, \ldots, i_n\}$ be a set of binary attributes called items. Let $T = \{t_1, t_2, \ldots, t_m\}$ be the set of transactions composing the database. Each transaction in $T$ is characterized by a unique transaction ID and contains a subset of the items in $I$. A rule is defined as an implication of the form $X \Rightarrow Y$, where $X \subseteq I$, $Y \subseteq I$ and $X \cap Y = \emptyset$. The sets of items (briefly, itemsets) $X$ and $Y$ are called, respectively, the antecedent (left-hand side or LHS) and the consequent (right-hand side or RHS) of the rule [22].
Several metrics can be used to measure the strength of an association rule; the most frequently employed are support and confidence. The support is defined as the proportion of transactions in the dataset $T$ that contain both the antecedent $X$ and the consequent $Y$. The confidence is defined as the probability of finding $Y$ in transactions given that these transactions also contain $X$:

$$\mathrm{supp}(X \Rightarrow Y) = \frac{|\{t \in T : X \cup Y \subseteq t\}|}{|T|} \qquad (1)$$

$$\mathrm{conf}(X \Rightarrow Y) = \frac{\mathrm{supp}(X \cup Y)}{\mathrm{supp}(X)} \qquad (2)$$
A rule is accepted as an association rule if its support and confidence satisfy user-specified thresholds (minsup and minconf).
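To make Eqs. (1)-(2) and the acceptance test concrete, the following minimal Python sketch computes support and confidence over a toy transactional database; the item names and thresholds are illustrative only, not values taken from our experiments.

```python
# Minimal sketch: support and confidence of a candidate rule X => Y
# over a toy transactional database (each transaction is a set of items).
transactions = [
    {"a", "b", "c"},
    {"a", "c"},
    {"a", "d"},
    {"b", "c"},
    {"a", "b", "c", "d"},
]

def support(itemset, db):
    """Fraction of transactions containing every item of `itemset` (Eq. 1)."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(antecedent, consequent, db):
    """Estimate of P(consequent | antecedent) on the database (Eq. 2)."""
    return support(antecedent | consequent, db) / support(antecedent, db)

X, Y = {"a"}, {"c"}
minsup, minconf = 0.25, 0.9
sup = support(X | Y, transactions)      # 3/5 = 0.60
conf = confidence(X, Y, transactions)   # 0.60 / 0.80 = 0.75
accepted = sup >= minsup and conf >= minconf
print(f"support={sup:.2f}, confidence={conf:.2f}, accepted={accepted}")
```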
In this section, we introduce our method for indexing and retrieving 3D objects. The principal idea of our approach is to represent the 3D objects by a set of 2D slices, transforming the shape-matching problem between 3D objects into measuring the similarity between their 2D slices. Fig. 1 shows the architecture of the proposed approach. First, we normalize the 3D objects to ensure invariance under scaling, translation, and rotation. Second, for each 3D object, we extract an initial set of 2D slices corresponding to determined axes. Next, we describe each 2D slice by a vector of Zernike moments. Then, we represent the 2D slices of each 3D object in a transactional database. Thereafter, we use the Apriori algorithm to select the most representative slices from the initial set. Finally, a similarity metric is proposed to measure the similarity between the 3D objects’ representative 2D slices.
Generally, 3D objects are given at arbitrary positions, orientations, and scales in 3D space. In many feature extraction processes, it is necessary to normalize the 3D object’s orientation and size before feature extraction to guarantee a distinctive representation. The normalization stage aims to ensure that similar 3D objects with different positions, orientations, and scales are represented by almost the same feature descriptors. Therefore, to ensure the invariance properties of our descriptor, which amounts to placing the 3D object in a canonical coordinate frame, we translate the 3D object’s centre of mass to the origin. To address scale normalization, the object is scaled so that the average distance of its surface from the centroid equals 1. Principal Component Analysis (PCA) is used to achieve rotation normalization.
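As an illustration of this normalization stage, the following Python sketch centres, scales, and PCA-aligns a mesh given as a vertex array. It approximates the surface-based average distance by the average over vertices, an assumption made only for brevity.

```python
import numpy as np

def normalize_mesh(vertices):
    """Pose-normalize a 3D mesh given as an (N, 3) vertex array.

    Minimal sketch of the normalization stage: translate the centre of mass
    to the origin, scale so that the average distance of the vertices from
    the centroid is 1 (the method uses the surface; vertices are a rough
    stand-in here), and align the principal axes (PCA) for rotation.
    """
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)                      # translation invariance
    v = v / np.linalg.norm(v, axis=1).mean()    # scale invariance
    cov = np.cov(v, rowvar=False)               # rotation invariance via PCA
    _, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    v = v @ eigvecs[:, ::-1]                    # largest variance on the X-axis
    return v
```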
Our approach consists in creating a set of 2D slices obtained by intersecting a set of planes with the 3D triangle mesh. Triangle meshes provide an efficient way to represent 3D objects; typically, geometry, connectivity, and property data are used together to represent a 3D triangle mesh. To create the initial set of 2D slices, we take the intersection of the 3D triangle mesh with equally spaced planes orthogonal to the determined axes. Fig. 2 shows an example of a 3D object, at a given position, with the 2D slices corresponding to its Y-axis using our approach.
At the outset, for each Cartesian axis (X-, Y-, and Z-axis), we take the intersection of the normalized 3D object with 50 equally spaced planes orthogonal to that axis.
Fig. 1. The architecture of our proposed approach.
Then, we rotate the 3D object about the three Cartesian axes (axis by axis) in steps of 20° up to 160°. At each rotation, we capture, for the other two Cartesian axes only, the intersection of the rotated 3D object with 50 equally spaced planes orthogonal to each of those axes (as Fig. 1 shows); i.e. if we rotate the 3D object about the X-axis, we extract the 2D slices corresponding to the Y- and Z-axes. Indeed, when we rotate the 3D object about an axis, the slices corresponding to that axis remain the same as those obtained along the same axis in the first position, up to a rotation. Since we will use the Zernike descriptor, which is rotation invariant, to characterize the 2D slices, it is sensible to eliminate the slices corresponding to the 3D object’s rotation axis.
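A possible implementation of this slice-extraction step is sketched below, assuming the trimesh library for plane-mesh sectioning; the bound-based plane placement and the handling of empty intersections are illustrative choices of ours.

```python
import numpy as np
import trimesh  # assumed available; any mesh library with plane sectioning works

def extract_slices(mesh, n_planes=50, rotation_step_deg=20):
    """Sketch of the initial 2D-slice extraction.

    For the normalized mesh and each Cartesian axis, intersect the mesh with
    `n_planes` equally spaced planes orthogonal to that axis; then repeat after
    rotating the object about each axis in 20-degree steps up to 160 degrees,
    keeping only the slices of the two other axes (slices along the rotation
    axis are only rotated copies, indistinguishable by Zernike magnitudes).
    """
    axes = np.eye(3)                                   # X, Y, Z directions
    slices = []

    def section_along(m, axis):
        lo, hi = m.bounds[0] @ axis, m.bounds[1] @ axis
        for h in np.linspace(lo, hi, n_planes + 2)[1:-1]:
            s = m.section(plane_origin=axis * h, plane_normal=axis)
            if s is not None:                          # plane may miss the mesh
                slices.append(s)

    for axis in axes:                                  # first position
        section_along(mesh, axis)

    for rot_axis in range(3):                          # rotated positions
        for angle in range(rotation_step_deg, 180, rotation_step_deg):
            rot = trimesh.transformations.rotation_matrix(
                np.radians(angle), axes[rot_axis])
            rotated = mesh.copy()
            rotated.apply_transform(rot)
            for other in set(range(3)) - {rot_axis}:   # skip the rotation axis
                section_along(rotated, axes[other])
    return slices
```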
Fig. 2. Example of a 3D object (a) with its 2D slices (b) corresponding to its Y-axis using our approach.
Among the many image descriptors in the literature, Zernike moments are deemed the most appropriate descriptors for representing the 2D slices, owing to their distinctive properties such as rotation invariance, small feature size, and fine image representation capacity. Zernike moments have been successfully used in various image analysis and object recognition tasks [23] [24] [25] [26]. Zhang and Lu [27] observed that Zernike moments are very useful for capturing the main characteristics of images, because these moments are orthogonal in nature, which ensures that moment values at different orders account for independent and unique features of an image.
In our method, only low-order Zernike moments are extracted from the 2D slices. Low-order Zernike moments capture the gross information of the image, whereas high-order moments represent its details, and the former are also less vulnerable to noise. Moreover, the magnitudes of the Zernike moments are rotation invariant, which allows us to eliminate the 2D slices corresponding to some axes and to better classify the extracted 2D slices.
Now that each initial 2D slice is characterized by a set of Zernike moments, we use the Apriori algorithm to select the representative 2D slices. To achieve this task, we represent the 2D slices of each 3D object in a transactional database, in which each row (transaction) corresponds to the 2D slices extracted along one extraction axis.
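A minimal sketch of this transactional representation is given below; the axis identifiers are placeholders for illustration, and the slice labels come from the clustering described next.

```python
from collections import defaultdict

def build_transactional_db(slice_labels, slice_axes):
    """Sketch of the transactional representation of one 3D object.

    `slice_labels[i]` is the cluster label assigned to the i-th 2D slice and
    `slice_axes[i]` identifies its extraction axis (e.g. ('X', 0) for the
    X-axis at rotation 0 degrees; identifiers are illustrative). Each
    transaction gathers the labels of the slices extracted along one axis;
    duplicate labels inside a row are dropped.
    """
    rows = defaultdict(set)
    for label, axis in zip(slice_labels, slice_axes):
        rows[axis].add(label)                 # one row per extraction axis
    return [sorted(items) for items in rows.values()]
```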
In order to label the initial set of 2D slices in the transactional database, we use the cluster validity index proposed by Kim et al. [28] to automatically determine the optimal number of clusters according to the 3D object’s complexity. A cluster structure can be in one of three states: under-partitioned (K < K*), optimally partitioned (K = K*), or over-partitioned (K > K*). The optimal number of clusters can be found using two measures: the mean intra-cluster distance (MICD) and the minimum inter-cluster distance (ICMD).
The MICD of the $i$-th cluster, $\mathrm{MICD}_i$, is defined by:

$$\mathrm{MICD}_i = \frac{1}{n_i} \sum_{x \in C_i} \lVert x - c_i \rVert \qquad (3)$$

where $C_i$, $c_i$ and $n_i$ respectively represent the data set of the $i$-th cluster, the centroid of the $i$-th cluster, and the number of data points in the $i$-th cluster.

$$\mathrm{ICMD}_{ij} = \lVert c_i - c_j \rVert \qquad (4)$$

where $c_i$ and $c_j$ respectively represent the centroids of the $i$-th and $j$-th clusters.
Let $X = \{x_1, x_2, \ldots, x_N\}$ be a finite data set, and let $c_1, c_2, \ldots, c_K$ be centroids, each characterizing one of the $K$ clusters. The under-partition measure $v_u(K)$ and over-partition measure $v_o(K)$, respectively defined by Eq. (5) and Eq. (6), have different scales depending on the structure and the amount of data; thus, a normalization of these functions is necessary.

$$v_u(K) = \frac{1}{K} \sum_{i=1}^{K} \mathrm{MICD}_i \qquad (5)$$

$$v_o(K) = \frac{K}{\min_{i \neq j} \mathrm{ICMD}_{ij}} \qquad (6)$$

for $2 \le K \le K_{\max}$.
Let us define the partition measure vectors as:

$$V_u = [\,v_u(2), v_u(3), \ldots, v_u(K_{\max})\,] \qquad (7)$$

$$V_o = [\,v_o(2), v_o(3), \ldots, v_o(K_{\max})\,] \qquad (8)$$

For each vector, the maximum and minimum values are computed as:

$$v_u^{\max} = \max_K v_u(K) \qquad (9)$$

$$v_u^{\min} = \min_K v_u(K) \qquad (10)$$

$$v_o^{\max} = \max_K v_o(K) \qquad (11)$$

$$v_o^{\min} = \min_K v_o(K) \qquad (12)$$
The normalization of each function then becomes:

$$v_u^{*}(K) = \frac{v_u(K) - v_u^{\min}}{v_u^{\max} - v_u^{\min}} \qquad (13)$$

$$v_o^{*}(K) = \frac{v_o(K) - v_o^{\min}}{v_o^{\max} - v_o^{\min}} \qquad (14)$$

Therefore $v_u^{*}(K)$ and $v_o^{*}(K)$ always lie between 0 and 1. As a result, the normalized partition measure vectors are defined as:

$$V_u^{*} = [\,v_u^{*}(2), \ldots, v_u^{*}(K_{\max})\,] \qquad (15)$$

$$V_o^{*} = [\,v_o^{*}(2), \ldots, v_o^{*}(K_{\max})\,] \qquad (16)$$
The validity index, denoted $v_{sv}(K)$, is formulated by adding $v_u^{*}(K)$ and $v_o^{*}(K)$, and is thus written as:

$$v_{sv}(K) = v_u^{*}(K) + v_o^{*}(K) \qquad (17)$$

The optimal number of clusters is obtained for the smallest value of $v_{sv}(K)$, with $K$ varying from 2 to $K_{\max}$. In our method, applying this cluster validity index over the interval [2, 500] allows us to automatically determine the optimal number of clusters depending on the complexity of the 3D object.
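The following Python sketch illustrates this index computation on top of a standard K-means implementation; it follows Eqs. (5)-(17) as described above, with the K-means settings chosen only for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def optimal_num_clusters(X, k_max=500):
    """Sketch of the cluster validity index of Kim et al. [28].

    For each K, the under-partition measure is the mean MICD over clusters
    (Eq. 5) and the over-partition measure is K divided by the minimum
    inter-cluster centroid distance (Eq. 6). Both are min-max normalized
    (Eqs. 13-14) and summed (Eq. 17); the K giving the smallest sum is kept.
    """
    X = np.asarray(X, dtype=float)
    ks = range(2, min(k_max, len(X) - 1) + 1)
    v_u, v_o = [], []
    for k in ks:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        centroids, labels = km.cluster_centers_, km.labels_
        micd = [np.linalg.norm(X[labels == i] - centroids[i], axis=1).mean()
                for i in range(k)]
        d_min = min(np.linalg.norm(centroids[i] - centroids[j])
                    for i in range(k) for j in range(i + 1, k))
        v_u.append(np.mean(micd))            # under-partition measure, Eq. (5)
        v_o.append(k / d_min)                # over-partition measure, Eq. (6)
    v_u, v_o = np.array(v_u), np.array(v_o)
    v_u = (v_u - v_u.min()) / (v_u.max() - v_u.min())   # Eq. (13)
    v_o = (v_o - v_o.min()) / (v_o.max() - v_o.min())   # Eq. (14)
    return list(ks)[int(np.argmin(v_u + v_o))]          # Eq. (17)
```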
Now that we have determined the optimal number of clusters depending on the 3D object’s complexity, we assign the same label to the 2D slices that belong to the same cluster. Next, in each row of the transactional database, we reduce the number of items (which correspond to the 2D slices) to the minimum by eliminating redundancies.
To extract the representative 2D slices, we use the transactional database of each 3D object and apply the Apriori algorithm to extract the association rules, which are then used to determine the representative slices. After experiments carried out to determine suitable minsup and minconf values, we concluded that thresholds of 25% and 90%, respectively, are appropriate.
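A hedged sketch of this mining step with an off-the-shelf Apriori implementation (mlxtend, assumed available) is given below; the way the mined rules are turned into representative-slice labels is our illustrative reading of the selection step, not a verbatim reproduction of it.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

def representative_labels(transactions, min_support=0.25, min_confidence=0.9):
    """Mine association rules over one object's transactional database.

    `transactions` holds one list of slice labels per extraction axis.
    Frequent itemsets are mined with minsup = 25% and rules kept at
    minconf = 90%; the labels appearing in the retained rules are taken
    here as the representative-slice labels (illustrative choice).
    """
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                          columns=te.columns_)
    frequent = apriori(onehot, min_support=min_support, use_colnames=True)
    rules = association_rules(frequent, metric="confidence",
                              min_threshold=min_confidence)
    labels = set()
    for _, rule in rules.iterrows():
        labels |= set(rule["antecedents"]) | set(rule["consequents"])
    return sorted(labels)
```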
The aim of similarity measurement is to keep the distances between similar objects as small as possible and to place dissimilar objects as far apart as possible in the feature space. Therefore, a suitable similarity measure must be designed to compute content similarity accurately.
In our approach, we have represented each 3D object by a set of characteristic slices, transforming the shape-matching problem between 3D objects into computing the similarity between their representative 2D slices; the one-to-one correspondence is thus turned into a many-to-many correspondence. Among the existing many-to-many distance measures, the Hausdorff distance has demonstrated its efficiency and power in recent retrieval works [29] [30].
Let us consider two sets $A$ and $B$. The Hausdorff distance measures the degree of mismatch between $A$ and $B$ by computing the distance of the point of $A$ that is furthest from any point of $B$, and vice versa. The Hausdorff distance thus focuses on the dissimilarity of the two sets, which can lead to inappropriate results when a set contains some noisy elements. For instance, assume that all elements of $A$ and $B$ are strongly similar except for one pair that differs: the Hausdorff distance ignores all the similar elements and retains only the dissimilarity of the most different pair.
In order to overcome this weakness of the Hausdorff distance, we define our metric so that it is based on the Hausdorff distance but takes into account the dissimilarity between all pairs in the sets. The dissimilarity between the representative slices of an object $O$ and the representative slices of a query $Q$ is defined by Eq. (18), where $d(o_i, q_j)$ denotes the Euclidean distance between the $i$-th representative slice of $O$ and the $j$-th representative slice of $Q$.
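As an illustration of such a many-to-many comparison, the sketch below computes a modified-Hausdorff-style dissimilarity in which every slice contributes the distance to its best match in the other set; it conveys the spirit of Eq. (18) but is not claimed to reproduce the exact formula.

```python
import numpy as np
from scipy.spatial.distance import cdist

def slice_set_dissimilarity(desc_o, desc_q):
    """Illustrative many-to-many dissimilarity between two slice sets.

    `desc_o` and `desc_q` are (N_o, d) and (N_q, d) arrays of Zernike
    descriptors of the representative slices of an object and of a query.
    Instead of keeping only the worst-matched pair as the Hausdorff distance
    does, every slice contributes the Euclidean distance to its best match in
    the other set (a modified-Hausdorff-style measure, given as a sketch).
    """
    d = cdist(np.asarray(desc_o), np.asarray(desc_q))   # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```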
In this study, the 3D objects of the Princeton Shape Benchmark (PSB) database are used to evaluate our approach; this database is freely available online and widely used in many works. It contains 1814 3D objects collected from the internet and classified by humans according to function and form. It includes a set of hierarchical classifications, separate training and test sets, annotations for each model, and a suite of software tools for the generation, analysis, and visualization of shape matching results [31].
In order to investigate the performance of our approach, we compared it with 12 3D indexing and retrieval approaches evaluated on the PSB: D2 Shape Distribution (D2) [32], Extended Gaussian Image (EGI) [33], Complex Extended Gaussian Image (CEGI) [34], Shape Histogram (SHELLS) [35], Shape Histogram (SECTORS) [35], Shape Histogram (SECSHEL) [35], Spherical Extent Function (EXT) [36], Radialized Spherical Extent Function (REXT) [37], Gaussian Euclidean Distance Transform (GEDT) [38], Spherical Harmonic Descriptor (SHD) [38], and Light Field Descriptor (LFD) [12].
To objectively evaluate our approach, we used the PSB’s evaluation tools with respect to the base classification. The benchmark evaluation tools generate visualizations (precision-recall plot, tier image, and the top five retrieval results) and statistics (Nearest Neighbour (NN), First Tier (FT), Second Tier (ST), E-Measure (EM), Discounted Cumulative Gain (DCG), and Normalized Discounted Cumulative Gain (N-DCG)) to facilitate the comparison of 3D object indexing and retrieval approaches. We invite the reader to consult [31], which provides more details on these evaluation criteria.
Tab. 1 summarizes the retrieval statistics for each method. LFD slightly outperforms our approach in ST (48.7% vs. 48.6%) and E-Measure (28.0% vs. 27.7%). However, our approach gives the best scores on the closest-match metrics (NN (74.0%), FT (39.6%), DCG (66.8%), and N-DCG (23.5%)), which means that our method is the best at placing the right matches at the top of the retrieval list.
Fig. 3 shows the recall-precision curves for each descriptor. As can be observed, the recall-precision curves demonstrate that our approach outperforms the compared methods and confirm the retrieval statistics shown in Tab. 1. Additionally, as the recall increases, the curve of our approach decreases more slowly than those of the other descriptors, which means that our method is more stable.
Fig. 4 presents the tier image visualizing nearest neighbour (white), first tier (yellow), and second tier (orange) matches obtained with our approach on the PSB database. A strong retrieval approach should exhibit groups of white-yellow pixels in the class-sized blocks along the diagonal. As can be noticed in Fig. 4, our method produces brighter pixels in the diagonal class-sized blocks, showing that 3D objects within the same class present higher similarity.
Fig. 5 shows a portion of the retrieval results on the test set of the PSB using our approach. The first column of the figure shows the 3D object queries, and the remaining columns present the 10 retrieved 3D objects in rank order. As can be seen from the results obtained by our method, practically all the retrieved 3D models belong to the query object’s class.
Table 1. Retrieval performances of our approach and of the methods used in the PSB

Shape descriptors   NN      FT      ST      E-Measure   DCG     N-DCG
Our approach        74.0%   39.6%   48.6%   27.7%       66.8%   23.5%
LFD                 65.7%   38.0%   48.7%   28.0%       64.3%   18.9%
REXT                60.2%   32.7%   43.2%   25.4%       60.1%   11.1%
SHD                 55.6%   30.9%   41.1%   24.1%       58.4%   8.0%
GEDT                60.3%   31.3%   40.7%   23.7%       58.4%   8.0%
EXT                 54.9%   28.6%   37.9%   21.9%       56.2%   3.9%
SECSHEL             54.6%   26.7%   35.0%   20.9%       54.5%   0.8%
VOXEL               54.0%   26.7%   35.3%   20.7%       54.3%   0.4%
SECTORS             50.4%   24.9%   33.4%   19.8%       52.9%   -2.2%
CEGI                42.0%   21.1%   28.7%   17.0%       47.9%   -11.4%
EGI                 37.7%   19.7%   27.7%   16.5%       47.2%   -12.7%
D2                  31.1%   15.8%   23.5%   13.9%       43.4%   -19.7%
SHELLS              22.7%   11.1%   17.3%   10.2%       38.6%   -28.6%
Fig. 3. Average precision-recall curves.
In order to investigate the stability of our proposed method against incomplete 3D objects, additional experiments were carried out. We created a set of incomplete 3D objects with the MeshLab software by arbitrarily removing parts of the 3D objects, and used them as queries. Fig. 6 shows some retrieval examples using our method: the first column shows 7 incomplete 3D object queries, and each row shows the top 10 retrieval results obtained with our approach. From the results obtained, we can conclude that our approach performs well and succeeds in correctly matching the incomplete 3D objects.
In this paper, we introduced a new partial 3D object indexing and retrieval approach combining 2D slices and the Apriori algorithm. The principal idea of our work was to take advantage of 2D slices and data mining algorithms to improve the 3D shape description for both complete and incomplete 3D objects. We used the Apriori algorithm to choose, from an initial set of 2D slices extracted from the 3D object, the most representative ones, and then used them to describe the 3D object. Extensive experiments have shown that our approach gives effective retrieval performance and outperforms some of the well-known methods in the literature.
Fig. 4. Tier image visualizing nearest neighbour (white), first tier (yellow), and second tier (orange) matches computed by matching every 3D object (rows) with every other 3D object (columns) in the PSB database using our approach.
Fig. 5. Top 10 retrieved 3D objects using the proposed approach with normal queries (first column: query; remaining columns: top ten retrieved 3D objects).
Fig. 6. Top 10 retrieved 3D objects using the proposed approach with incomplete queries (first column: query; remaining columns: top ten retrieved 3D objects).
In future studies, we intend to use a multi-agent system to make our approach less computationally expensive. We also plan to continue working on partial 3D object retrieval approaches by taking advantage of the power of deep learning.
[1]
S. Zhao, L. Chen, H. Yao, Y. Zhang, and X. Sun, “Strategy for dynamic 3d depth
data matching towards robust action retrieval,” Neurocomputing, vol. 151, pp.
533–543, 2015.
[2]
J. Cheng, W. Bian, and D. Tao, “Locally regularized sliced inverse regression
based 3d hand gesture recognition on a dance robot,” Information Sciences, vol.
221, pp. 274–283, 2013.
[3]
G. L. López, A. P. P. Negrón, A. D. A. Jiménez, J. R. Rodríguez, and R. I. Paredes, “Comparative analysis of shape descriptors for 3d objects,” Multimedia Tools and Applications, vol. 76, no. 5, pp. 6993–7040, 2017.
[4]
C. Wang, M. Cheng, F. Sohel, M. Bennamoun, and J. Li, “Normalnet: A voxel-based cnn for 3d object classification and retrieval,” Neurocomputing, vol. 323, pp. 139–147, 2019.
[5]
M. Bouksim, K. Arhid, F. R. Zakani, M. Aboulfatah, and T. Gadi, “New approach for 3d mesh retrieval using artificial neural network and histogram of features,” Scientific Visualization, vol. 10, pp. 84–94, 2018.
[6]
L. Luciano and A. B. Hamza, “A global geometric framework for 3d shape
retrieval using deep learning,” Computers & Graphics, vol. 79, pp. 14–23,
2019.
[7]
S. Biasotti, E. M. Thompson, and M. Spagnuolo, “Context-adaptive navigation of
3d model collections,” Computers & Graphics, vol. 79, pp. 1–13, 2019.
[8]
Y. Wu, H. Li, Y. Du, and Q. Cai, “Non-rigid 3d shape classification based on
low-level features,” in Proceedings of 2018 Chinese Intelligent Systems
Conference. Springer, 2019, pp. 651–659.
[9]
K. Arhid, F. R. Zakani, B. Sirbal, M. Bouksim, M. Aboulfatah, and T. Gadi, “A
novel approach for partial shape matching and similarity based on data
envelopment analysis,” , vol. 43, no. 2, 2019.
[10]
A. Agathos, I. Pratikakis, P. Papadakis, S. Perantonis, P. Azariadis, and N. S.
Sapidis, “3d articulated object retrieval using a graph-based representation,”
The Visual Computer, vol. 26, no. 10, pp. 1301–1319, 2010.
[11]
J. Tierny, J.-P. Vandeborre, and M. Daoudi, “Partial 3d shape retrieval by reeb
pattern unfolding,” in Computer Graphics Forum, vol. 28, no. 1. Wiley Online
Library, 2009, pp. 41–55.
[12]
D.-Y. Chen, X.-P. Tian, Y.-T. Shen, and M. Ouhyoung, “On visual similarity
based 3d model retrieval,” in Computer graphics forum, vol. 22, no. 3. Wiley
Online Library, 2003, pp. 223–232.
[13]
P. Papadakis, I. Pratikakis, T. Theoharis, and S. Perantonis, “Panorama: A 3d
shape descriptor based on panoramic views for unsupervised 3d object
retrieval,” International Journal of Computer Vision, vol. 89, no. 2-3, pp.
177–192, 2010.
[14]
A.-A. Liu, W.-Z. Nie, and Y.-T. Su, “3d object retrieval based on multi-view
latent variable model,” IEEE Transactions on Circuits and Systems for Video
Technology, vol. 29, no. 3, pp. 868–880, 2018.
[15]
I. Ouazzani Taybi, R. Alaoui, F. R. Zakani, K. Arhid, M. Bouksim, and T. Gadi,
“A novel efficient 3d object retrieval method based on representative slices,”
in Multimedia Computing and Systems (ICMCS), 2016 5th International Conference
on. IEEE, 2016, pp. 639–644.
[16]
I. Ouazzani Taybi, M. Bouksim, R. Alaoui, and T. Gadi, “A novel partial 3d
object retrieval method using adaptive slices clustering.”
International
Journal of Intelligent Engineering and Systems, Vol.12, No.1, 2019.
[17]
T. Furuya and R. Ohbuchi, “Dense sampling and fast encoding for 3d model
retrieval using bag-of-visual features,” in Proceedings of the ACM
international conference on image and video retrieval. ACM, 2009, p. 26.
[18]
H. Laga, T. Schreck, A. Ferreira, A. Godil, I. Pratikakis, and R. Veltkamp,
“Bag of words and local spectral descriptor for 3d partial shape retrieval,” in
Proceedings of the Eurographics Workshop on 3D Object Retrieval (3DOR11).
Citeseer, 2011, pp. 41–48.
[19]
A. Dahbi, S. Jabri, Y. Ballouki, and T. Gadi, “A new method to select the
interesting association rules with multiple criteria,” Int. J. Intell. Eng.
Syst, vol. 10, no. 5, pp. 191–200, 2017.
[20]
G. Hébrail and Y. Lechevallier, “Data mining et analyse des données,” in G. Govaert (Ed.), Analyse des données, Lavoisier, Paris, pp. 323–355, 2003.
[21]
R. Agrawal, R. Srikant et al., “Fast algorithms for mining association rules,”
in Proc. 20th int. conf. very large data bases, VLDB, vol. 1215, 1994, pp.
487–499.
[22]
G. D’Angelo, S. Rampone, and F. Palmieri, “Developing a trust model for pervasive computing based on apriori association rules learning and bayesian classification,” Soft Computing, vol. 21, no. 21, pp. 6297–6315, 2017.
[23]
C.-W. Tan and A. Kumar, “Accurate iris recognition at a distance using stabilized
iris encoding and zernike moments phase features,” IEEE Transactions on Image
Processing, vol. 23, no. 9, pp. 3962–3974, 2014.
[24]
H. Dai and S. Liao, “Central-symmetrical property analysis on circularly
orthogonal moments,” Journal of Theoretical and Applied Computer Science, vol.
8, no. 2, pp. 11–26, 2014.
[25]
C. Singh et al., “Improving image retrieval using combined features of hough
transform and zernike moments,” Optics and Lasers in Engineering, vol. 49, no.
12, pp. 1384–1396, 2011.
[26]
X. Gao, Q. Wang, X. Li, D. Tao, and K. Zhang, “Zernike-moment-based image super
resolution,” IEEE Transactions on Image Processing, vol. 20, no. 10, pp.
2738–2747, 2011.
[27]
D. Zhang and G. Lu, “Review of shape representation and description
techniques,” Pattern recognition, vol. 37, no. 1, pp. 1–19, 2004.
[28]
D.-J. Kim, Y.-W. Park, and D.-J. Park, “A novel validity index for determination of the optimal number of clusters,” IEICE Transactions on Information and Systems, vol. 84, no. 2, pp. 281–285, 2001.
[29]
S. Zhao, H. Yao, Y. Zhang, Y.Wang, and S. Liu, “View-based 3d object retrieval
via multi-modal graph learning,” Signal Processing, vol. 112, pp. 110–118,
2015.
[30]
Y. Gao, M. Wang, R. Ji, X. Wu, and Q. Dai, “3-d object retrieval with hausdorff
distance learning,” IEEE Transactions on industrial electronics, vol. 61, no.
4, pp. 2088–2098, 2013.
[31]
P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser, “The princeton shape
benchmark,” in Shape modeling applications, 2004. Proceedings. IEEE, 2004, pp.
167–178.
[32]
R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, “Matching 3d models with
shape distributions,” in Shape Modeling and Applications, SMI 2001
International Conference on. IEEE, 2001, pp. 154–166.
[33]
B. K. P. Horn, “Extended gaussian images,” Proceedings of the IEEE, vol. 72,
no. 12, pp. 1671–1686, 1984.
[34]
S. B. Kang and K. Ikeuchi, “Determining 3-d object pose using the complex
extended gaussian image,” in Proceedings. 1991 IEEE Computer Society Conference
on Computer Vision and Pattern Recognition. IEEE, 1991, pp. 580–585.
[35]
M. Ankerst, G. Kastenmüller, H.-P. Kriegel, T. Seidl et al., “Nearest neighbor classification in 3d protein databases,” in ISMB, vol. 99, 1999, pp. 34–43.
[36]
D. Saupe and D. V. Vranić, “3d model retrieval with spherical harmonics and moments,” in Joint Pattern Recognition Symposium. Springer, 2001, pp. 392–397.
[37]
D. V. Vranic, “An improvement of rotation invariant 3d-shape based on functions
on concentric spheres,” in Proceedings 2003 International Conference on Image
Processing (Cat. No. 03CH37429), vol. 3. IEEE, 2003, pp. III–757.
[38]
M. Kazhdan, T. Funkhouser, and S. Rusinkiewicz, “Rotation invariant spherical
harmonic representation of 3D shape descriptors,” in Symposium on geometry
processing, vol. 6, 2003, pp. 156–164.