DOI: 10.1145/1943552.1943554

research-article

GPS-aided recognition-based user tracking system with augmented reality in extreme large-scale areas

Published: 23 February 2011

ABSTRACT

We present a recognition-based user tracking and augmented reality system that works in extremely large-scale areas. When a user captures an image of a building facade, the system returns the precise location of the building together with augmented information about it. Although GPS cannot provide camera poses, it is used to narrow the search range in the image database. A patch-retrieval method keeps the computation efficient and enables real-time camera pose recovery. Using the patch matches as prior information, full-image matching is then performed efficiently through propagation, yielding a more stable camera pose. Augmented information such as building names and locations is then delivered to the user. The proposed system consists of two main parts: offline database building and online user tracking. The database is composed of images of different locations of interest, which are clustered into groups according to their UTM coordinates. An overlapped clustering method is used so that the retrieval range is restricted while ping-pong effects at cluster borders are avoided. For each cluster, a vocabulary tree is built for retrieving the most similar view. During tracking, the rough location of the user is obtained from GPS, and the exact location and camera pose are computed by querying patches of the captured image. The patch-based formulation makes tracking robust to occlusions and dynamic objects in the scene. Moreover, the overlapped clusters let the system emulate a "soft handoff," avoiding frequent swaps of clusters in and out of memory. Experiments show that the proposed tracking and augmented reality system is efficient and robust in many cases.
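The overlapped clustering and soft-handoff behavior described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the uniform grid layout, the cell size, and the overlap margin are assumed parameters chosen for the example, whereas the paper clusters database locations by their UTM coordinates with an unspecified method.

```python
import math

# Illustrative parameters (assumptions, not the paper's values):
CELL = 500.0     # cluster cell size in metres of UTM easting/northing
OVERLAP = 100.0  # margin by which neighbouring clusters overlap

def overlapped_clusters(utm_points):
    """Assign each (easting, northing) point to every grid cell whose
    overlapped extent contains it, so border points belong to more
    than one cluster (each cluster would get its own vocabulary tree)."""
    clusters = {}
    for e, n in utm_points:
        # Check the point's own cell and the neighbours it may overlap into.
        for de in (-1, 0, 1):
            for dn in (-1, 0, 1):
                ce = math.floor(e / CELL) + de
                cn = math.floor(n / CELL) + dn
                lo_e, hi_e = ce * CELL - OVERLAP, (ce + 1) * CELL + OVERLAP
                lo_n, hi_n = cn * CELL - OVERLAP, (cn + 1) * CELL + OVERLAP
                if lo_e <= e < hi_e and lo_n <= n < hi_n:
                    clusters.setdefault((ce, cn), []).append((e, n))
    return clusters

def select_cluster(gps_fix, current):
    """Soft handoff: keep the current cluster while the GPS fix is still
    inside its overlapped extent, and switch only once the fix leaves it.
    This avoids ping-pong swaps when the user walks along a border."""
    e, n = gps_fix
    if current is not None:
        ce, cn = current
        if (ce * CELL - OVERLAP <= e < (ce + 1) * CELL + OVERLAP and
                cn * CELL - OVERLAP <= n < (cn + 1) * CELL + OVERLAP):
            return current
    return (math.floor(e / CELL), math.floor(n / CELL))
```

A point near a cell boundary is indexed in both neighbouring clusters, so a user crossing the boundary can keep querying the already-loaded vocabulary tree until the GPS fix clearly leaves the overlapped extent.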


Supplemental Material

110223_26192_02_acm.mp4 (mp4, 132.8 MB)


Published in

MMSys '11: Proceedings of the second annual ACM conference on Multimedia systems
February 2011, 294 pages
ISBN: 9781450305181
DOI: 10.1145/1943552
Copyright © 2011 ACM

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 176 of 530 submissions (33%)
