Hyperdimensional computing combines very high-dimensional vector spaces (e.g. 10,000-dimensional) with a set of carefully designed operators to perform symbolic computations with large numerical vectors. The goal is to exploit their representational power and noise robustness for a broad range of computational tasks. Although there are surprising and impressive results in the literature, the application to practical problems in the area of robotics is so far very limited. In this work, we aim at providing an easily accessible introduction to the underlying mathematical concepts and describe the existing computational implementations in the form of vector symbolic architectures (VSAs). This is accompanied by references to existing applications of VSAs in the literature. To bridge the gap to practical applications, we describe and experimentally demonstrate the application of VSAs to three different robotic tasks: viewpoint-invariant object recognition, place recognition, and learning of simple reactive behaviors. The paper closes with a discussion of current limitations and open questions.

Details: the red curve in the left plot evaluates vector similarities (the query image index q is known and we compare the similarity of the bundle \(I^k_x + I^k_y\) and the query view \(I^k_q\)); the resulting similarity is about 0.17, and the right plot tells us that for about 53% of the objects the query image was most similar to the correct bundle. For comparison without bundling, the blue curves in Fig. 7 show the results when comparing the query image to the individual images \(I^k_x\) and \(I^k_y\) (instead of their bundle). For the distance evaluation in the left plot, we use the closest of the two individual results for each query. For the query results in the right plot, all views \(I^k_x\) and \(I^k_y\) are stored in the database and a single query is made (the number of database entries, and thus comparisons, has now doubled compared to the bundling approach). The VSA approach not only reduces the number of comparisons, it also performs slightly better than using individual comparisons in both plots.
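To illustrate the bundling idea behind the comparison above, the following minimal NumPy sketch uses random bipolar hypervectors as stand-ins for the actual image encodings (all variable names here are hypothetical, not from the paper). It shows that a bundle \(I_x + I_y\) built by element-wise addition remains measurably similar to a noisy query view of one of its components, while two independent random hypervectors are nearly orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # dimensionality typical for hyperdimensional computing


def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


# Two hypothetical view encodings (stand-ins for I_x and I_y),
# drawn as random bipolar (+1/-1) hypervectors.
I_x = rng.choice([-1.0, 1.0], size=d)
I_y = rng.choice([-1.0, 1.0], size=d)

# Bundling: element-wise addition; the result stays similar to both inputs.
bundle = I_x + I_y

# A noisy query view of I_x: 30% of its components are flipped.
flip = rng.random(d) < 0.3
query = np.where(flip, -I_x, I_x)

# Unrelated hypervectors are nearly orthogonal in high dimensions...
print(cos(I_x, I_y))        # close to 0
# ...while the bundle retains similarity to a noisy view of a component.
print(cos(query, bundle))   # clearly above chance
```

One comparison against the bundle thus suffices to detect membership of either view, which is why the bundled database in the experiment needs half as many comparisons as storing the individual views.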