Image Starmap Search


March 2016

1.0 Introduction

Key Idea:

·      Every image is a starmap of feature points.

·      If we can read a starmap like poetry, we can search billions of images and sub-images just as we search word documents.

1.1 Prior Art and Problems of Image Search:

·       Bag-of-words search over image features and image meta-data is commonly used.

·       Its over-relaxation of query constraints causes search engines to return too many weak results.

·       It can only do whole-image search; it cannot do sub-image search.

1.2 Our Solution to Image Search:

·       The core is our patent-pending technology of position-specific, controlled-tolerances image feature starmap spans search.

·       It transforms image space into image feature starmap space using commonly used image feature detectors and descriptor extractors. Each starmap has a number of stars; each star has a position in the source image and a number of descriptor values.

·       Each feature point (i.e., star) is treated as a searchable token with a token string and a position. The token string is a serialization of the star’s descriptor values. All the position-sorted tokens of an image become the starmap representation of the image to index and to search.

·       It can search sub-images in the same way as whole images. The search engine returns the matching images and the positions of the search target within the matching images.

·       It uses two levels of indices: i) a tokens-to-documents index and ii) a token-attributes-to-tokens index. The first index is for starmap spans search and the second is for token suggestion (i.e., similar star) search.

·       More search relaxation features are supported: i) allowing missing attribute values, ii) allowing missing stars of the target starmap, and iii) allowing tolerances on the distances between the target stars.

·       It can detect the scale of the target image relative to the matching image.

·       It can take image meta-data into consideration as search constraints.

In the following three sections, we will present how this is done:

·       The Searchable – Image Starmaps Generated By OpenCV Image Feature Detectors and Descriptor Extractors

·       The Search Framework – ElasticSearch/Lucene With Image Starmap Search Plugin

·       The Core Matching Algorithms – Starmap Spans Query With User-Controlled Tolerances


2.0 The Searchable – Image Starmaps Generated By OpenCV Image Feature Detectors and Descriptor Extractors

Even though the prior art of image search is very primitive, the art and technology of image feature detection and description are very strong, inheriting half a century of research and software work.

Among others, OpenCV (Open Source Computer Vision) provides powerful and convenient tools for image processing and for image feature detection and description. We use its tools and results as the basis of our image search engine: namely, we convert images from image space into image starmap space to perform our position-specific, controlled-tolerances image starmap indexing and search.

2.1 Image Feature Detectors and Descriptor Extractors

There are many image feature detectors and descriptor extractors available for different features and different uses.

Popular Feature Detectors include:

·       SURF

·       SIFT

·       FAST

·       ORB

·       STAR

·       MSER

·       GFTT

·       HARRIS

·       Dense

·       SimpleBlob

·       etc.

Popular Feature Descriptor Extractors include:

·       BRIEF

·       SIFT

·       SURF

·       ORB

·       BRISK

·       FREAK

·       etc

We simply choose a pair of feature detector and descriptor extractor and run the corresponding OpenCV programs. As a result, a number of feature points and their descriptors are returned to us, like:

       -- k=0, point=[x=48,y=70], descriptors=[79, 138, 144, 149, 243, 148, 58, 101, 205, 148, 35, 21, 17, 249, 113, 41, 19, 19, 68, 44, 190, 113, 60, 78, 243, 35, 157, 133, 4, 36, 255, 144]

       -- k=1, point=[x=65,y=96], descriptors=[78, 72, 197, 190, 215, 22, 30, 105, 9, 133, 39, 116, 165, 250, 161, 40, 17, 141, 197, 60, 111, 99, 32, 78, 76, 37, 10, 26, 10, 40, 163, 210]

       -- k=2, point=[x=157,y=31], descriptors=[79, 23, 185, 70, 235, 140, 177, 252, 109, 20, 236, 126, 233, 189, 114, 107, 35, 29, 235, 122, 191, 240, 63, 22, 251, 150, 121, 215, 155, 15, 61, 184]

       -- k=3, point=[x=157,y=32], descriptors=[79, 9, 152, 86, 179, 151, 181, 237, 237, 21, 224, 28, 121, 173, 50, 107, 51, 181, 203, 168, 191, 125, 190, 30, 126, 17, 188, 195, 56, 39, 50, 242]

       -- k=4, point=[x=77,y=124], descriptors=[103, 185, 241, 64, 208, 204, 51, 34, 229, 93, 196, 95, 71, 113, 223, 119, 61, 5, 161, 72, 255, 113, 54, 215, 50, 38, 219, 132, 5, 183, 177, 136]

       -- k=5, point=[x=78,y=124], descriptors=[215, 57, 251, 3, 216, 76, 153, 154, 226, 57, 196, 127, 207, 119, 126, 223, 168, 77, 173, 96, 209, 242, 53, 81, 50, 134, 91, 20, 211, 215, 209, 8]

       -- k=6, point=[x=176,y=33], descriptors=[79, 75, 137, 166, 172, 47, 149, 232, 104, 130, 79, 12, 171, 188, 162, 8, 146, 154, 239, 96, 239, 43, 42, 58, 237, 144, 79, 88, 127, 89, 143, 243]

       -- k=7, point=[x=137,y=78], descriptors=[79, 138, 145, 149, 243, 150, 62, 103, 77, 213, 39, 21, 177, 249, 121, 43, 19, 19, 68, 44, 190, 101, 60, 78, 223, 35, 156, 133, 4, 44, 255, 144]

       -- k=8, point=[x=178,y=100], descriptors=[79, 9, 144, 214, 179, 151, 181, 237, 109, 21, 106, 12, 49, 237, 48, 107, 51, 180, 203, 168, 191, 253, 182, 30, 252, 48, 188, 195, 26, 33, 59, 186]

       -- k=9, point=[x=181,y=111], descriptors=[102, 234, 205, 171, 23, 68, 86, 99, 9, 197, 23, 215, 175, 114, 203, 42, 85, 133, 133, 126, 109, 103, 0, 95, 12, 39, 206, 29, 69, 56, 167, 208]

       . . .

If we draw the feature points onto the image, we get Figure 1. In this figure, if you ignore the original picture, you see a starmap, just as if you had shot a picture of the evening sky. The image search problem becomes searching for a target starmap over billions of indexed image starmaps.

Figure 1. Sample Image Feature Points (white points).


2.2 Generate Tokens From Image Feature Points and Their Descriptors

Next, we convert feature points and their descriptors into a searchable format: we represent each feature point by a searchable token, which has a position and a token string. The simplest way to convert feature point descriptor values to a token string is to serialize them into a string with ‘_’ separators, padding with 0’s to bring all values to the same designated width.
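A minimal sketch of this serialization in Python (the function name is our own; the 3-digit width matches the attrValueWidth default of 3 used by the tools in Section 3):

```python
def descriptors_to_token(values, width=3, sep="_"):
    """Serialize descriptor values into one token string,
    zero-padding each value to a fixed designated width."""
    return sep.join(str(v).zfill(width) for v in values)

# First five descriptor values of the k=0 feature point shown earlier:
print(descriptors_to_token([79, 138, 144, 149, 243]))  # -> 079_138_144_149_243
```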

Thus, the starmap of our image (say file ./flower.jpg.tokens) becomes:

numTokens, numMetaValues

193, 0

x, y, weight, token

29, 45, 10872.885, 094_207_144_086_161_157_057_116_079_020_110_045_081_186_048_003_051_037_074_040_190_255_063_195_104_081_028_195_136_035_057_170

28, 48, 18028.014, 094_207_144_086_179_159_057_244_079_020_106_045_081_186_048_003_051_165_072_172_190_255_060_066_105_017_060_131_184_034_049_187

29, 77, 9569.717, 180_242_246_138_037_233_072_054_190_172_040_059_000_000_182_221_232_078_058_235_066_155_019_201_242_089_085_162_073_161_059_101

48, 70, 33739.203, 078_072_152_151_165_157_049_173_237_021_235_005_001_168_049_097_019_167_203_232_190_127_190_066_105_177_028_192_023_035_115_162

30, 94, 11110.741, 246_120_243_149_179_212_011_056_199_001_022_220_020_081_120_035_040_009_032_239_130_208_053_142_179_100_059_134_074_192_197_056

39, 107, 21723.09, 207_159_177_212_227_093_151_116_079_020_236_117_069_122_176_099_019_001_104_044_254_241_049_199_234_050_030_149_011_051_063_136

37, 111, 24305.402, 079_142_177_116_227_020_182_100_109_021_236_023_113_189_049_107_019_149_200_044_191_121_052_071_248_055_158_151_010_034_063_152

37, 111, 24057.121, 079_142_177_116_227_020_182_100_109_021_236_023_113_189_049_107_019_149_200_044_191_121_052_071_248_055_158_151_010_034_063_152

65, 96, 34579.105, 078_201_176_196_179_157_049_189_237_005_224_029_065_169_176_099_051_019_235_232_190_247_062_082_243_177_028_192_016_003_179_184

38, 132, 12426.787, 161_049_095_016_090_226_105_154_150_251_209_251_206_067_092_222_220_078_059_019_080_146_199_216_019_074_001_236_215_223_156_004

38, 133, 11265.69, 161_049_095_016_090_226_105_154_150_251_209_251_206_067_092_222_220_078_059_019_080_146_199_216_019_074_001_236_215_223_156_004

31, 149, 22061.219, 078_200_209_017_211_150_062_098_073_133_039_023_181_248_249_035_149_133_068_044_038_097_048_015_039_039_187_004_004_036_127_218

32, 149, 20467.809, 078_200_213_017_211_150_062_098_077_133_039_023_181_248_249_041_021_133_068_044_038_097_048_015_039_039_187_004_004_036_127_218

157, 31, 45518.227, 079_138_144_149_243_148_058_101_205_148_035_021_017_249_113_041_019_019_068_044_190_113_060_078_243_035_157_133_004_036_255_144

157, 32, 40789.016, 079_138_145_149_243_150_062_103_077_213_039_021_177_249_121_043_019_019_068_044_190_101_060_078_223_035_156_133_004_044_255_144

28, 165, 9488.463, 075_030_004_062_071_054_126_108_025_211_143_208_183_250_001_042_087_186_084_094_052_109_136_044_141_043_170_027_046_056_166_213

77, 124, 43385.34, 079_009_152_086_179_151_181_237_237_021_224_028_121_173_050_107_051_181_203_168_191_125_190_030_126_017_188_195_056_039_050_242

78, 124, 40718.72, 079_009_144_214_179_151_181_237_109_021_106_012_049_237_048_107_051_180_203_168_191_253_182_030_252_048_188_195_026_033_059_186

138, 65, 11868.975, 164_248_095_011_030_100_078_039_030_235_023_243_143_082_223_181_220_069_181_095_069_102_000_252_004_075_082_189_195_152_219_070

35, 169, 17004.424, 074_134_164_118_039_023_054_101_011_148_190_004_241_233_033_040_151_186_064_172_254_057_232_042_233_049_165_019_044_044_044_209

138, 66, 11810.855, 164_248_031_011_018_100_078_039_030_225_023_247_143_082_223_181_220_069_181_095_069_102_016_220_012_111_082_189_195_144_219_006

138, 68, 11108.025, 228_250_031_011_019_068_094_047_030_161_023_247_135_082_253_183_216_069_165_111_101_102_016_204_012_047_094_149_067_144_255_070

176, 33, 41427.168, 079_075_137_166_172_047_149_232_104_130_079_012_171_188_162_008_146_154_239_096_239_043_042_058_237_144_079_088_127_089_143_243

140, 70, 11313.073, 228_250_155_059_083_068_094_039_088_037_023_231_007_082_221_053_152_133_133_078_102_103_016_206_012_043_090_149_067_144_251_002

. . .


·       Image meta-data is kept in the tokens file to be indexed and incorporated into search as browsing constraints;

·       Configure a lower feature detector threshold for indexing and a higher threshold for search;

·       If we want fewer tokens, always select the top tokens by importance (i.e., weight or score);

·       Always sort tokens by position before indexing or searching;

·       If a token string is longer than the maximal term length of a search engine, breaking the long token into multiple tokens at the same position still works well.

·       Or, because the starmap search problem is over-determined, we can also use just a few feature point descriptor values per token.

·       Feature points from multiple feature detectors can be mixed together for indexing and search, as long as we sort them by position.
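The long-token note above can be sketched as follows (the function name and tuple layout are our own; the behavior mirrors the splitLongTokenToKeepAllDescriptorValuesBatchSize option of the tools in Section 3):

```python
def split_token(x, y, token, batch_size, sep="_"):
    """Split a long token string into several shorter tokens that all
    keep the same (x, y) position, batch_size values per piece."""
    values = token.split(sep)
    pieces = [sep.join(values[i:i + batch_size])
              for i in range(0, len(values), batch_size)]
    return [(x, y, p) for p in pieces]

# Every piece keeps position (29, 45), so position-sorted order is preserved.
for x, y, piece in split_token(29, 45, "094_207_144_086_161_157", batch_size=3):
    print(x, y, piece)
```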


2.3 Search-ability Analysis of Image Feature Points

We can analyze the search-ability of our starmaps by calculating the “differences” between the image starmaps to index and the image starmap to search. The differences include:

·       The number or ratio of search points missing from the indexed points;

·       The maximal difference of token attribute values; and

·       The maximal difference of point-to-point distances.

This is a good way to evaluate how good our feature detectors and descriptor extractors are, and to estimate the controlled-tolerance values we should use in search (i.e., search relaxation).
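The three differences above can be computed with a rough sketch like this (our own names and data; each search point is matched to its nearest indexed point, and a small positional tolerance decides whether it counts as missing):

```python
import math

def searchability(indexed, search, pos_tol=2):
    """indexed / search: lists of (x, y, descriptor_values).
    Returns (missed ratio, max attribute difference,
    max difference of point-to-point distances)."""
    def nearest(p, points):
        return min(points, key=lambda q: (q[0] - p[0])**2 + (q[1] - p[1])**2)

    missed, max_attr, pairs = 0, 0, []
    for s in search:
        i = nearest(s, indexed)
        if abs(i[0] - s[0]) + abs(i[1] - s[1]) > pos_tol:
            missed += 1               # no indexed point close enough
            continue
        max_attr = max(max_attr, max(abs(a - b) for a, b in zip(i[2], s[2])))
        pairs.append((s, i))
    max_dist = 0.0
    for s1, i1 in pairs:              # compare pairwise star distances
        for s2, i2 in pairs:
            max_dist = max(max_dist, abs(math.dist(s1[:2], s2[:2])
                                         - math.dist(i1[:2], i2[:2])))
    return missed / len(search), max_attr, max_dist

indexed = [(0, 0, (10, 20)), (5, 5, (30, 40))]
search = [(0, 1, (11, 20)), (5, 5, (30, 42)), (100, 100, (0, 0))]
print(searchability(indexed, search))
```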

Figure 2. Sample Image Indexed Feature Points vs Search Feature Points vs Selected Search Feature Points

(white points – indexed points, yellow circles – search points, green boxes – selected search points).


3.0 The Search Framework – ElasticSearch/Lucene With Image Starmap Search Plugin

We use the open-source ElasticSearch/Lucene to perform our image starmap search at large scale. This framework is shown in Figure 3.

Starmap-search-related features are implemented as a library for Lucene and as a plugin for ElasticSearch.

Figure 3. The Image Starmap Search Framework.


3.1 ElasticSearch Image Indexer



                                    <filePath (for image or tokens file or directory to index)>

                                    <fieldsToIndex (default tokens)>

                                    <startDocId (default 10000)>

                                    <indexTokenAttributes (default 1, 0/1)>

                                    <addTokenAttributeLabelToAttributeValue (default 1, 0/1)>

                                    <numTopFeaturePointsToDetect (default 200, by score)>

                                    <numFeaturePointTokensToUse (default 0 to use all)>

                                    <numValuesPerTokenToUse (default 5, first N)>

                                    <treatmentOfRemainingValues (default 0, 0/1/2/3)>

                                    <clusterName (default elasticsearch)>

                                    <index (default sm101)>

                                    <type (default starmap)>

                                    <host (default localhost)>

                                    <port (default 9300)>

                                    <debug (default 0)>

                                    <tokensDir (default ./tokens)>


                                    <detectorType (default SIFT, SIFT/SURF/ORB/FAST/...)>

                                    <descriptorType (default SIFT/SURF/BRIEF/ORB)>

                                    <threshold4FAST (e.g., 80)>

                                    <hessianThreshold4SURF (e.g., 800)>

                                    <splitLongTokenToKeepAllDescriptorValuesBatchSize (default 0 for feature off)>

                                    <descriptorValuesSelectionBitMask (default null for feature off, e.g. 001001...)>

                                    <attrValueWidth (default 3)>

                                    <configFile (to specify a batch of above commandline settings in a file, settings in file can be overwritten by above individual settings)>


e.g., ESImageStarmapIndexer detectorType:SURF descriptorType:BRIEF hessianThreshold4SURF:600 numTopFeaturePointsToDetect:500 filePath:./media fieldsToIndex:tokens


3.2 ElasticSearch Image Searcher



                                    <filePath (image or tokens file to search)>

                                    <fieldsToSearch (default tokens)>

                                    <numTopFeaturePointsToDetect (default 200)>

                                    <numFeaturePointTokensToUse (default 5, 0 to use all)>

                                    <numFeaturePointTokensToSelect (default 0 to be same as numFeaturePointTokensToUse)>

                                    <minDeltaDistance4TokensToSelect (default 10)>

                                    <numValuesPerTokenToUse (default 5, first N)>

                                    <treatmentOfRemainingValues (default 0, 0/1/2/3)>

                                    <useSpansQuery (default 1, 0/1)>

                                    <maxHits (default 20)>

                                    <searchTokenAttributes (default 0, 0/1/2)>

                                    <increaseTolerance (default 1)>

                                    <decreaseTolerance (default 1)>

                                    <addTokenAttributeLabelToAttributeValue (default 1, 0/1)>

                                    <numMissingedAttributesAllowed (default 0)>

                                    <detectScale (default 0, 0/1)>

                                    <numMissingPointsAllowed (default 1)>

                                    <distanceTolerance (default 0)>

                                    <clusterName (default elasticsearch)>

                                    <index (default sm101)>

                                    <type (default starmap)>

                                    <host (default localhost)>

                                    <port (default 9300)>

                                    <sort_by (default empty)>

                                    <debug (default 0)>

                                    <tokensDir (default ./tokens)>

                                    <browseRequest (default empty)>

                                    <searchModel (default spans, keywords/kand/spans)>

                                    <K (default -1 for off)>

                                    <scalePower (default 100)>

                                    <numMissingFieldAllowed (default 0)>

                                    <penaltyPerMissingField (default 0.0)>

                                    <penaltyPerMissingPoint (default 0.0)>

                                    <penaltyPerMissingAttribute (default 0.0)>


                                    <detectorType (default SURF, SIFT/SURF/ORB/FAST/...)>

                                    <descriptorType (default BRIEF, SIFT/SURF/BRIEF/ORB)>

                                    <threshold4FAST (e.g., 80)>

                                    <hessianThreshold4SURF (e.g., 800)>

                                    <splitLongTokenToKeepAllDescriptorValuesBatchSize (default 0 for feature off)>

                                    <descriptorValuesSelectionBitMask (default null for feature off, e.g. 001001...)>

                                    <attrValueWidth (default 3)>

                                    <configFile (to specify a batch of above commandline settings in a file, settings in file can be overwritten by above individual settings)>


e.g., ESImageStarmapSearcher detectorType:SURF descriptorType:BRIEF hessianThreshold4SURF:800 filePath:./media/flower.jpg fieldsToSearch:tokens searchTokenAttributes:1


3.3 Two Levels of Indices

We index starmaps into two levels of indices:

·       A Token-Attributes-to-Tokens Index for controlled-tolerances token suggestion search;

·       A Tokens-to-Starmaps Index for keywords search, bag-of-words search, and Starmap Spans Search.

Not to mention that image meta-data is always indexed to act as search constraints.
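A toy sketch of the two levels, with plain Python dicts standing in for the Lucene indices (all data, tokens, and names here are illustrative, not the real on-disk structures):

```python
# Level 2: attribute value -> tokens (for controlled-tolerance range lookups)
attr_to_tokens = {
    "weight": {74: ["074_174"], 75: ["075_175"], 80: ["080_175"]},
    "height": {174: ["074_174"], 175: ["075_175", "080_175"]},
}
# Level 1: token -> postings list of (doc_id, position)
token_to_starmaps = {
    "074_174": [(101, 90)],
    "075_175": [(101, 74), (203, 12)],
    "080_175": [(203, 40)],
}

def tokens_in_range(attr, lo, hi):
    """Range lookup on the second index - the token-suggestion building block."""
    idx = attr_to_tokens[attr]
    return {t for v, toks in idx.items() if lo <= v <= hi for t in toks}

suggested = tokens_in_range("weight", 73, 77)
postings = sorted(p for t in suggested for p in token_to_starmaps[t])
print(sorted(suggested), postings)
```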

Figure 4. Two Levels of Indices.


4.0 The Core Matching Algorithms – Starmap Spans Matching With User-Controlled Tolerances


4.1 Controlled Tolerances

o   Allow tolerances on token attribute values – thus allowing token suggestion search

o   Allow missing token attribute values

o   Allow tolerances on the distances between stars of the target starmap

o   Allow missing stars of the target starmap

With all these search target relaxations, we have an almost continuous control knob, moving our search from exact starmap matching at one end to bag-of-words search at the other.

Let’s give a simple example of token suggestion search with controlled tolerances. Suppose we have a person search index. We search for a target person with a weight of 75kg and a height of 175cm, i.e., our target token is “075_175”. We allow an attribute value tolerance of +/- 2. Our token suggestion search for token 075_175 becomes

(OR of <all persons with weight between 073 and 077>) AND (OR of <all persons with height between 173 and 177>)

Most search engines (e.g., Lucene) can retrieve all tokens in a given range, such as <all persons with weight between 073 and 077> or <all persons with height between 173 and 177>, directly and very quickly. The AND join of the retrieved token sets gives the suggestions for the given target token. For example, tokens 074_174, 076_176 and 075_176 may be returned for the target token 075_175.

The original target token (e.g., 075_175) may or may not be in the suggestion list. This is the beauty of controlled-tolerances search. Without it, image starmap search with a huge number of feature descriptor values would rarely find any matching image except the target image itself.
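The person example above can be sketched as follows (our own function name; a real engine retrieves each range from its sorted term dictionary, while a linear scan is used here only for clarity):

```python
def suggest(target, tol, index_tokens):
    """Token suggestion search: AND-join of per-attribute range retrievals."""
    t_vals = [int(v) for v in target.split("_")]
    result = set(index_tokens)
    for i, tv in enumerate(t_vals):
        # Retrieve all tokens whose i-th attribute lies in [tv-tol, tv+tol]
        in_range = {tok for tok in index_tokens
                    if abs(int(tok.split("_")[i]) - tv) <= tol}
        result &= in_range            # AND-join of the per-attribute sets
    return sorted(result)

index_tokens = ["074_174", "076_176", "075_176", "080_175", "075_180"]
print(suggest("075_175", 2, index_tokens))  # -> ['074_174', '075_176', '076_176']
```

Note that the target token 075_175 itself is not in this index, yet three similar tokens are still suggested.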


4.2 Iteration Order of Image Starmap Stars

Modern search engines were developed for one-dimensional word document search. To enable such search engines to iterate over a two-dimensional image starmap or image pixels, we have to define an iteration order.

Positions in an image are in pixel coordinates; therefore, image feature points, and thus image starmap stars, are in pixel coordinates.

Among many possible choices, we decided to iterate image pixels in increasing order of (x+y, x), as shown in Figure 5. This iteration order may fit human orientation better and gives a better chance of reaching the middle of the image quickly.

Figure 5. Iteration Order of Image Pixels.
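The (x+y, x) iteration order amounts to a simple sort key (sketch with our own function name; the first two stars are the (29,45)/(28,48) pair from the tokens file earlier):

```python
def star_order(point):
    """Sort key for the (x+y, x) iteration order of starmap stars."""
    x, y = point
    return (x + y, x)

stars = [(28, 48), (157, 31), (29, 45), (65, 96)]
for x, y in sorted(stars, key=star_order):
    print(f"position={x + y}, payload={x}, star=({x},{y})")
```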

For example, the following two image starmap tokens

x, y, weight, token

29, 45, 10872.885, 094_207_144_086_161_157_057_116_079_020_110_045_081_186_048_003_051_037_074_040_190_255_063_195_104_081_028_195_136_035_057_170

28, 48, 18028.014, 094_207_144_086_179_159_057_244_079_020_106_045_081_186_048_003_051_165_072_172_190_255_060_066_105_017_060_131_184_034_049_187


will be indexed as follows in a one-dimensional search engine index:

position=(x+y), payload=x, token

74, 29,  094_207_144_086_161_157_057_116_079_020_110_045_081_186_048_003_051_037_074_040_190_255_063_195_104_081_028_195_136_035_057_170

76, 28,  094_207_144_086_179_159_057_244_079_020_106_045_081_186_048_003_051_165_072_172_190_255_060_066_105_017_060_131_184_034_049_187


Modern search engines (e.g., Lucene) allow each indexed token posting to carry a payload together with the token’s position and text. Using position=(x+y) and payload=x here enables us to recover the token’s position in the original image in the starmap spans matching algorithm below. More payload values can be kept as needed for more complex target matching.
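The mapping in both directions is a couple of one-liners (sketch with our own names; a dict stands in for a Lucene posting):

```python
def token_to_posting(x, y, token):
    """Index a 2-D star as a 1-D posting: position = x + y, payload = x."""
    return {"position": x + y, "payload": x, "token": token}

def posting_to_point(posting):
    """Recover the star's pixel position from position and payload."""
    x = posting["payload"]
    return (x, posting["position"] - x)

p = token_to_posting(29, 45, "094_207_144")
print(p["position"], p["payload"])  # -> 74 29, as in the index listing above
print(posting_to_point(p))          # -> (29, 45)
```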


4.3 Image Starmaps Spans Matching Algorithm and Implementation

By now, we have introduced:

·       How to transform images into image starmaps to index and search;

·       Our search framework for large-scale, industry-grade search;

·       What controlled-tolerances search is and what common tolerances we can use to relax search; and

·       The iteration order of image pixels or starmap stars in a one-dimensional keyword search engine.

It is time to introduce the last and most important component: the image starmap spans matching algorithm and its implementation.

4.3.1 Spans and Spans Search

The Lucene search engine introduced spans search and supports a variety of spans matching algorithms. A spans is an iterator over a “pattern” or a “concept”. A spans usually has API functions next(), skipTo(docId, pos), doc(), start(), end(), and done(). It finds matches and returns them in ascending order, in the form (doc(), start(), end()).

The simplest spans is TermSpans (say TermSpans(“hello”)) which iterates through the postings list in the index for a given term (i.e., “hello”) and returns matching spans in ascending order like (doc=101, pos=140), (doc=203,pos=456),…

The next simple and useful spans is OrTermsSpans (say OrTermsSpans([074_174, 076_176, 075_176])), which iterates through the logical OR combination of the multiple postings lists in the inverted index for a given set of terms (i.e., [074_174, 076_176, 075_176]) and returns matching spans in ascending order, like (doc=201, pos=123), (doc=905, pos=45), …

The beauty and advantage of using spans is that the details are wrapped behind the spans interface API, so we can concentrate on using one level of spans to build up and match the next level of spans: for example, from TermSpans to OrTermsSpans, from OrTermsSpans to RowSpans, from RowSpans to MatrixSpans, and from MatrixSpans to CubicSpans. This gives a much cleaner and smarter way to tackle complex pattern search problems.

Notice that spans search maintains the polynomial computational complexity of search. Even if we add spans functions like tag(tagId) and rewindToTag(tagId), which let the search do a bounded amount of rewinding (say N rewinds) within an indexed doc's postings list and make search much more powerful, the computational complexity is still polynomial.
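The spans API described above can be sketched over an in-memory postings list (this is our own toy illustration of the interface, not Lucene's actual implementation):

```python
class TermSpans:
    """Minimal spans-style iterator over a sorted in-memory postings list
    of (doc_id, position) pairs, mimicking next()/skipTo()/doc()/start()."""
    def __init__(self, postings):
        self.postings = sorted(postings)
        self.i = -1

    def next(self):
        self.i += 1
        return self.i < len(self.postings)

    def skip_to(self, doc_id, pos):
        """Advance to the first posting at or after (doc_id, pos)."""
        while self.i < 0 or self.postings[self.i] < (doc_id, pos):
            if not self.next():
                return False
        return True

    def doc(self):
        return self.postings[self.i][0]

    def start(self):
        return self.postings[self.i][1]

    def end(self):
        return self.postings[self.i][1] + 1

hello = TermSpans([(203, 456), (101, 140)])
while hello.next():
    print(hello.doc(), hello.start())  # -> 101 140, then 203 456
```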

4.3.2 Starmap Spans Search

To tackle the image starmap search problem, we introduced StarmapSpans to do image starmap matching. Referring to Figure 6, the matching flow works like this:

1)     Start with a search target image and transform it into a starmap in the same way we transformed the indexed images; say our target starmap has three stars A, B and C;

2)     Perform token suggestion search for each of the target starmap stars A, B and C to find the suggested similar stars for each in the index;

3)     Use a TermSpans to iterate the postings list in the inverted index for a target star (i.e., star A, B, or C) or a suggested similar star;

4)     Use an OrTermsSpans to iterate the OR-combined postings list of the multiple postings lists in the inverted index for all the OR'ed stars of a given target star (i.e., star A, B, or C);

5)     Construct a StarmapSpans from a vector of the above OrTermsSpans or TermSpans, one for each target star A, B or C;

6)     Advance the spans of all stars of the target starmap (i.e., stars A, B and C) to the same indexed doc, say doc101;

7)     Calculate the earliest logical starting position of the target image in the current indexed doc101 (i.e., the top-left corner of the black box in Figure 6) from the current positions of the target starmap stars A, B and C in the indexed doc101;

8)     Advance each star's spans (i.e., star A, B, or C) to its desired position or beyond, with respect to the found logical start position, using the skipTo() function;

9)     Check whether the current positions of stars A, B and C match the target starmap by all measures and all tolerances.

10)   If they match, emit a matching span with (doc(), start(), end(), …).

11)   If they do not match, advance the spans of the star that determined the logical starting position in step 7) by one next() call to avoid looping.

12)   Repeat 7) to 11) until we advance out of the current indexed doc;

13)   Repeat 6) to 12) until all postings lists of all target stars are iterated.
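Steps 6)–11) can be illustrated with a heavily simplified, single-doc sketch (our own names and toy data structures stand in for the real spans; the (x+y, x) position encoding and tolerance machinery are omitted for clarity):

```python
def match_in_doc(target, postings, tolerance=0):
    """Toy version of steps 6)-9): target maps star name -> (x, y) in the
    target image; postings maps star name -> list of (x, y) occurrences of
    that star (or its suggested similar stars) in one indexed doc.
    Returns the (dx, dy) offset where the target matches, or None."""
    anchor = min(target)                      # any star can act as the anchor
    ax, ay = target[anchor]
    for px, py in postings.get(anchor, []):   # candidate logical starts (step 7)
        dx, dy = px - ax, py - ay
        ok = all(                             # steps 8-9: every star must appear
            any(abs(qx - (sx + dx)) <= tolerance and
                abs(qy - (sy + dy)) <= tolerance
                for qx, qy in postings.get(star, []))
            for star, (sx, sy) in target.items())
        if ok:
            return dx, dy                     # step 10): emit a match
    return None                               # steps 11)-13): caller advances

target = {"A": (0, 0), "B": (10, 5), "C": (4, 12)}
doc101 = {"A": [(50, 60)], "B": [(60, 65)], "C": [(54, 72)]}
print(match_in_doc(target, doc101))  # -> (50, 60): the sub-image is found there
```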

4.4 Detect Scale and Rotation

o   Allow detecting the scale and rotation between the target image and a matching indexed image

A scale and rotation detection phase is needed before the regular starmap spans search. Once we find the scale and the rotation center point and angle, we scale and rotate our target starmap before searching for it in the indexed starmaps.

To implement scale and rotation detection using the same target starmap and indexed postings lists, we need to add tag(<number>) and rewindToTag(<number>) functions to our StarmapSpans and all upstream TermSpans and OrTermsSpans.

To improve computational performance, we introduced a CachedOneDocStarmapSpans for starmap search with scale and rotation detection. Caching one doc at a time avoids costly rewinding of OrTermsSpans.

With these new tools and supporting functions, the algorithm for detecting scale and rotation is much simpler than you might think:

a)      Pick a target point in the target starmap and one of its peer points in the indexed starmap; use the pair as the pivot point, say (x0, y0);

b)     Every other target point P in the target starmap, projected to each of its possible peer points P’ in the indexed starmap, implies a scale and a rotation angle of the point projection, represented by {<scale>, <angle>, <x0>, <y0>} (e.g., {1.5, 1.1, 100, 129});

c)      Select the most supported {<scale>, <angle>, <x0>, <y0>} set by popular vote of all target points in the target starmap;

d)      If the number of votes by target points for a {<scale>, <angle>, <x0>, <y0>} set is equal to or greater than <number of target points> - <number of missing target points allowed> - 1, it is a valid scale and rotation to use;

e)      Repeat a) to d) until one valid scale and rotation set is found.
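Steps a) to d) can be sketched as follows (a toy illustration with our own names; the pivot pair is given rather than searched over, i.e., the outer repeat loop of step e) is omitted, and votes are bucketed by rounding):

```python
import math
from collections import Counter

def detect_scale_rotation(target, indexed, pivot_t, pivot_i,
                          missing_allowed=0, ndigits=2):
    """Vote on a (scale, angle) pair about a chosen pivot pair.
    target / indexed: lists of (x, y) points; pivot_t / pivot_i: the pivot."""
    votes = Counter()
    for tx, ty in target:
        if (tx, ty) == pivot_t:
            continue
        vt = (tx - pivot_t[0], ty - pivot_t[1])        # vector from target pivot
        for ix, iy in indexed:
            vi = (ix - pivot_i[0], iy - pivot_i[1])    # vector from indexed pivot
            if vt == (0, 0) or vi == (0, 0):
                continue
            scale = math.hypot(*vi) / math.hypot(*vt)
            angle = math.atan2(vi[1], vi[0]) - math.atan2(vt[1], vt[0])
            votes[(round(scale, ndigits), round(angle, ndigits))] += 1
    if not votes:
        return None
    best, n = votes.most_common(1)[0]
    # Step d): enough target points must agree on the same (scale, angle)
    if n >= len(target) - missing_allowed - 1:
        return best
    return None

target = [(0, 0), (10, 0), (0, 10)]
indexed = [(100, 100), (120, 100), (100, 120)]   # target scaled 2x, no rotation
print(detect_scale_rotation(target, indexed, (0, 0), (100, 100)))  # -> (2.0, 0.0)
```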

Figure 7. Detecting Scale and Rotation.

Figure 7 gives a graphical view of this process. As you can see, it is difficult to detect the rotation of the indexed image (i.e., the large gray box), and the result is not very useful for rotating and scaling the target image (i.e., the small black box), since we still need a pivot point to rotate about and a scale to apply.

On the other hand, we can pin one target point in the target image to an indexed point in the indexed image (i.e., select a pivot point) and then rotate and scale the other target points about the pivot to see whether there exists a scale-and-angle pair on which enough target points agree.

In image starmap search, the number of projections from a target point P in the target image to an indexed point P’ in the indexed image is relatively small. Thus, the computational cost of adding scale and rotation detection to image starmap search is minimal.


4.5 Search Multiple Related Starmaps in the Same Starmap Spans Search

o   Allow joint matching of multiple starmaps of multiple images of the same object from different angles or channels

As mentioned before, a higher-level pattern spans, MultiStarmapsSpans, can be built from multiple child StarmapSpans. The matching flow is very similar; more scoring logic needs to be added, for example when missing the StarmapSpans of one view angle or channel is allowed.

5.0 Summary

·      Every image is a starmap of feature points.

·      We have found a way to read a starmap like poetry, so we can search billions of images and sub-images just as we search word documents.

·       It is our position-specific, controlled-tolerances image or sub-image starmap spans search.