The new GPT-4 app can change lives

The first application to take advantage of the image recognition capabilities of GPT-4 has been described as “life changing” by visually impaired users.

Be My Eyes, a Danish startup, has integrated the AI model into a new feature for people who are blind or partially sighted. Called “Virtual Volunteer,” the object recognition tool can answer questions about any image submitted to it.

Imagine, for example, that a user is hungry. They can simply photograph an ingredient and request related recipes. If they prefer to eat out, they can snap a picture of a map and get directions to a restaurant. Upon arrival, they can take a photo of the menu and listen to the options. If they then want to burn off the extra calories at a gym, they can use their smartphone’s camera to find a treadmill.

“I know we’re in the middle of an AI hype cycle right now, but several of our beta testers have used the phrase ‘life-changing’ when describing the product,” Mike Buckley, CEO of Be My Eyes, told TNW.

“This has the opportunity to be transformative by empowering the community with unprecedented resources to better navigate physical environments, address everyday needs, and gain more independence.”

Virtual Volunteer takes advantage of a recent OpenAI software update. Unlike previous iterations of the company’s models, GPT-4 is multimodal, meaning it can parse both images and text as inputs.
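To illustrate what multimodal input looks like in practice, the sketch below assembles a request payload that pairs a photo with a text question, in the general shape of OpenAI's chat-completions message format. The model name and image URL are illustrative placeholders, and the exact schema may vary by API version:

```python
# Sketch: a multimodal chat message that combines text and an image,
# roughly following OpenAI's chat-completions message format.
# The model name and URL below are illustrative placeholders.

def build_image_question(question: str, image_url: str) -> dict:
    """Assemble a request body that sends both a text prompt and an image."""
    return {
        "model": "gpt-4-vision",  # placeholder model name
        "messages": [
            {
                "role": "user",
                # A single user turn can carry multiple content parts:
                # here, one text part and one image part.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_question(
    "What dishes are on this menu?",
    "https://example.com/menu.jpg",
)
print(payload["messages"][0]["content"][0]["text"])
```

This is the pattern behind use cases like photographing a menu and asking what is on it: the image and the question travel in the same message, and the model answers in text.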

Be My Eyes took the opportunity to test the new functionality. While image-to-text systems are nothing new, the startup had never before been convinced by the performance of existing software.

“From too many bugs to the inability to chat, the tools available on the market weren’t equipped to address many of our community’s needs,” Buckley says. “The image recognition offered by GPT-4 is superior, and the analytics and conversational layers powered by OpenAI increase the value and utility exponentially.”

Be My Eyes previously supported users exclusively with human volunteers. According to OpenAI, the new function can generate the same level of context and understanding. But if the user doesn’t get a good response or just prefers a human connection, they can still call a volunteer.