ABSTRACT
The world is full of information, interfaces, and environments that are inaccessible to blind people. When navigating indoors, blind people are often unaware of key visual information, such as posters, signs, and exit doors. When accessing specific interfaces, blind people cannot do so independently without first learning the layout and labeling the controls with sighted assistance. My work investigates interactive systems that integrate computer vision, on-demand crowdsourcing, and wearables to amplify the abilities of blind people, offering solutions for real-time navigation of both environments and interfaces. It provides blind people with more options for accessing information and greater freedom in navigating the world.