DOI: 10.1145/1518701.1518990

Designable visual markers

Published: 04 April 2009

Abstract

Visual markers are graphic symbols designed to be easily recognised by machines. They are traditionally used to track goods, but there is increasing interest in their application to mobile HCI. By scanning a visual marker through a camera phone, users can retrieve localised information and access mobile services.
One missed opportunity in current visual marker systems is that the markers themselves cannot be visually designed: they are not expressive to humans and thus fail to convey information before being scanned. This paper provides an overview of d-touch, an open-source system that allows users to create their own markers and control their aesthetic qualities. The system runs in real time on mobile phones and desktop computers. To increase computational efficiency, d-touch imposes constraints on the design of the markers in terms of the relationship between dark and light regions in the symbols. We report a user study in which pairs of novice users generated between 3 and 27 valid and expressive markers within one hour of being introduced to the system, demonstrating its flexibility and ease of use.
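
The abstract says that d-touch recognises markers by the relationship between their dark and light regions rather than by a fixed geometric layout. As a rough illustration of how such topology-based matching can work (not the actual d-touch implementation), the sketch below binarises an image, extracts the containment hierarchy of its regions with OpenCV, and reduces it to a canonical nesting signature; two drawings then "match" when their signatures are equal. The use of OpenCV, the file names, and the exact matching rule are all assumptions made for this example.

```python
# Illustrative sketch only: topology-based marker matching in the spirit of
# the region-adjacency idea described in the abstract. This is NOT the
# d-touch implementation; it assumes OpenCV (>= 4.x) and example file names.
import cv2


def nesting_signature(image_path):
    """Binarise an image and return a canonical signature describing how its
    dark/light regions nest inside one another (shape and size are ignored)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # RETR_TREE returns the full containment hierarchy of contours:
    # hierarchy[0][i] = [next_sibling, prev_sibling, first_child, parent]
    _, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return ()
    hierarchy = hierarchy[0]

    def signature(node):
        # Canonical form of an unordered tree: recursively sorted child tuples.
        children = []
        child = hierarchy[node][2]            # first child, -1 if none
        while child != -1:
            children.append(signature(child))
            child = hierarchy[child][0]       # next sibling
        return tuple(sorted(children))

    roots = [i for i, h in enumerate(hierarchy) if h[3] == -1]
    return tuple(sorted(signature(i) for i in roots))


if __name__ == "__main__":
    # Hypothetical files: a photographed marker and its original design.
    # They "match" if their region-nesting structures are identical, however
    # differently the individual regions are drawn.
    print(nesting_signature("marker_photo.jpg") == nesting_signature("marker_design.png"))
```

Because only the nesting structure is checked in a scheme like this, the designer is free to draw each region in whatever shape or style they like, which is what makes markers of this kind visually designable while remaining machine readable.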

Published In

CHI '09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
April 2009
2426 pages
ISBN:9781605582467
DOI:10.1145/1518701
Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. UI toolkits
  2. fiducial recognition
  3. mobile HCI
  4. mobile devices
  5. user studies
  6. visual marker design
  7. visual marker recognition

Qualifiers

  • Research-article

Conference

CHI '09

Acceptance Rates

CHI '09 Paper Acceptance Rate: 277 of 1,130 submissions, 25%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

Cited By

  • (2024) Uncovering the Metaverse within Everyday Environments: A Coarse-to-Fine Approach. 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), pp. 499-509. DOI: 10.1109/COMPSAC61105.2024.00074. Online publication date: 2-Jul-2024.
  • (2023) High Accuracy and Wide Range Recognition of Micro AR Markers with Dynamic Camera Parameter Control. Electronics, 12(21), 4398. DOI: 10.3390/electronics12214398. Online publication date: 24-Oct-2023.
  • (2023) Low-cost and Non-visual Labels Using Magnetic Printing. Proceedings of the ACM on Human-Computer Interaction, 7(EICS), pp. 1-18. DOI: 10.1145/3593232. Online publication date: 19-Jun-2023.
  • (2022) DynaTags: Low-Cost Fiducial Marker Mechanisms. Proceedings of the 2022 International Conference on Multimodal Interaction, pp. 432-443. DOI: 10.1145/3536221.3556591. Online publication date: 7-Nov-2022.
  • (2022) Connecting Everyday Objects with the Metaverse: A Unified Recognition Framework. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), pp. 401-406. DOI: 10.1109/COMPSAC54236.2022.00063. Online publication date: Jun-2022.
  • (2020) TIP-Toy: a tactile, open-source computational toolkit to support learning across visual abilities. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, pp. 1-14. DOI: 10.1145/3373625.3417005. Online publication date: 26-Oct-2020.
  • (2020) Tangible Interfaces with Printed Paper Markers. Proceedings of the 2020 ACM Designing Interactive Systems Conference, pp. 909-923. DOI: 10.1145/3357236.3395578. Online publication date: 3-Jul-2020.
  • (2019) Automating the Intentional Encoding of Human-Designable Markers. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-11. DOI: 10.1145/3290605.3300417. Online publication date: 2-May-2019.
  • (2019) Ready Species One: Exploring the Use of Augmented Reality to Enhance Systematic Biology with a Revision of Fijian Strumigenys (Hymenoptera: Formicidae). Insect Systematics and Diversity, 3(6). DOI: 10.1093/isd/ixz005. Online publication date: 12-Nov-2019.
  • (2019) ShadowHunter: Facilitating Children's Outdoor Exploration with Shadows. HCI International 2019 – Late Breaking Papers, pp. 292-305. DOI: 10.1007/978-3-030-30033-3_23. Online publication date: 26-Jul-2019.
