An AI strategically improved its accuracy at interpreting images by asking people seemingly “dumb” questions, a new study finds, suggesting that such interactions help it learn. The approach could help AI researchers design smarter programs, from robots that follow commands to software that diagnoses diseases. “It’s super cool work,” says Natasha Jaques, a computer scientist at Google who studies machine learning but was not involved with the research.
AI typically becomes competent through a method called machine learning: it finds patterns in data, learning a color such as teal, say, by working through thousands of images that contain teal objects. But even a vast database has gaps. A system might correctly label a color in an image as teal and still face a basic question: what is the object itself?
In a technique called active learning, an AI evaluates its own knowledge and asks for the data it is missing, which researchers typically supply by paying online workers, an approach that is costly and hard to scale. In the new study, a team of researchers at Stanford University led by Ranjay Krishna, now at the University of Washington, Seattle, went a step further: they trained a machine learning system not just to discover gaps in its data but to compose “dumb” questions about images that strangers would patiently answer.
Q: “What is the shape of the sink?”
A: “It’s a square.”
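The core of active learning is simple: query a human only where the model’s confidence is lowest. Below is a minimal Python sketch of that loop; the function names, the toy confidence measure, and the example data are all invented for illustration, not drawn from the study’s code.

```python
def model_confidence(example, knowledge):
    """Toy confidence score: 1.0 if the example is already labeled,
    0.0 if it is a gap in the training data."""
    return 1.0 if example in knowledge else 0.0

def active_learning_round(unlabeled, knowledge, oracle, budget=2):
    """Rank examples by confidence and ask an 'oracle' (e.g., a paid
    crowdworker) to label only the least certain ones."""
    ranked = sorted(unlabeled, key=lambda ex: model_confidence(ex, knowledge))
    queries = ranked[:budget]
    for ex in queries:
        knowledge[ex] = oracle(ex)  # spend the labeling budget only where it helps
    return queries

# Toy run: the model already knows one patch is teal but not the rest.
knowledge = {"color_patch_1": "teal"}
unlabeled = ["color_patch_1", "sink_photo", "sunset_photo"]
asked = active_learning_round(unlabeled, knowledge,
                              oracle=lambda ex: "label_for_" + ex)
print(asked)  # ['sink_photo', 'sunset_photo']
```

The already-known example is skipped, so every paid label fills an actual gap, which is exactly the economy active learning is after.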
How an AI presents itself matters, notes Kurt Gray, a social psychologist at the University of North Carolina, Chapel Hill, who has studied human-AI interaction but was not involved in the study. “In this case, you want it to be like a kid, right?” he says. “Otherwise, people might think you’re a troll asking seemingly ridiculous questions.”
The team rewarded the AI for writing intelligible questions: whenever people replied to a query, the system received feedback and adjusted itself to ask similar questions in the future. In the process, the AI picked up language and social norms, honing its ability to pose questions that were relatable and answerable.
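One way to picture this reward loop is as a bandit problem: each question style is an arm, and a human reply is the reward. The sketch below uses invented style names and response rates purely for illustration; the real system’s reward design is far richer than this.

```python
import random

random.seed(0)

# Hypothetical question styles and the (hidden from the agent) chance
# that a reader bothers to answer each style.
RESPONSE_RATE = {"kid_like": 0.8, "robotic": 0.2}

def run_bandit(rounds=2000, epsilon=0.1):
    """Epsilon-greedy bandit: reward = 1 only when a human replies,
    so the agent drifts toward the style people actually answer."""
    value = {style: 0.0 for style in RESPONSE_RATE}
    count = {style: 0 for style in RESPONSE_RATE}
    for _ in range(rounds):
        if random.random() < epsilon:
            style = random.choice(list(RESPONSE_RATE))  # explore
        else:
            style = max(value, key=value.get)           # exploit
        reward = 1.0 if random.random() < RESPONSE_RATE[style] else 0.0
        count[style] += 1
        value[style] += (reward - value[style]) / count[style]  # running mean
    return value, count

value, count = run_bandit()
print(max(value, key=value.get))  # the style people answer more often wins out
```

Because only answered questions pay off, the agent ends up asking overwhelmingly in the style that draws replies, mirroring how the study’s system learned to sound approachable.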
Among the new AI’s components are neural networks, complex mathematical functions inspired by the brain’s architecture. “There are many moving pieces … that all need to play together,” Krishna notes.
One component selected an image on Instagram, say a sunset, and a second asked a question about that image, for example, “Is this photo taken at night?” Additional components extracted facts from reader responses and learned about images from them.
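Those moving pieces can be sketched as a simple four-stage pipeline. Everything below is a hypothetical stand-in for the real components (image selector, question generator, answer parser, learner), kept deliberately trivial to show only the flow of data.

```python
def select_image(feed):
    """Stage 1: pick an image from the feed (here, just the first)."""
    return feed[0]

def generate_question(image):
    """Stage 2: ask something about the image."""
    return f"Is this photo of a {image['subject']} taken at night?"

def extract_fact(image, reply):
    """Stage 3: turn a human reply into a (subject, attribute) fact."""
    attribute = "night" if reply.lower().startswith("yes") else "day"
    return (image["subject"], attribute)

def learn(knowledge, fact):
    """Stage 4: fold the fact back into the model's knowledge."""
    subject, attribute = fact
    knowledge.setdefault(subject, set()).add(attribute)

feed = [{"subject": "sunset"}]
knowledge = {}
image = select_image(feed)
question = generate_question(image)  # "Is this photo of a sunset taken at night?"
learn(knowledge, extract_fact(image, "Yes, it is."))
print(knowledge)  # {'sunset': {'night'}}
```

The point is the round trip: a question goes out, a reply comes back, and the reply is converted into something the model can train on.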
Across 8 months and more than 200,000 questions on Instagram, the system’s accuracy at answering questions similar to those it had posed increased by 118%, the team reported in the Proceedings of the National Academy of Sciences.
A comparable system that posted questions on Instagram without being tuned to maximize response rates improved its accuracy by only 72%, in part because people rarely replied to it.
According to Jaques, “The main innovation was rewarding the system for getting humans to respond, which is not crazy from a technical perspective, but very important from a research-direction perspective.”
She is also impressed that the researchers took the system into the real world on Instagram, which required care: before posting, people checked all AI-generated questions for derogatory content. Researchers hope systems like these will lead to smarter AI, with machines that understand, say, that teal is a shade of green, robots that ask directions for their assigned chores, and chatbots that converse with people about everyday issues.
Social skills could also help AI adjust to new situations on the fly, Jaques says. A self-driving car, for example, might ask for help navigating a construction zone. “If you can learn effectively from humans, that’s a general skill.”