This AI algorithm can match the average American on real SAT questions
Yes, yes — of course a calculator won at a math competition. That's not the point. This story, which concerns a rather amazing program called GeoS from the Allen Institute for Artificial Intelligence (AI2), is about the ability of AI to usefully engage with the world. To a computer, with a brain literally structured for these sorts of operations, the math SAT is not a test of math, but of reading comprehension. That's why this story is so interesting: GeoS isn't as good as the average American at geometry, it's as good as the average American at the SAT itself.
Specifically, this AI program was able to score 49% accuracy on official SAT geometry questions, and 61% on practice questions. The 49% figure is basically identical to the average for real human test-takers. The program was not given digitized or specially labeled versions of the test, but looked at the exact same question layout as real students. It read the text. It interpreted the diagrams. It figured out what the question was asking, and then it solved the problem. It only got the answer about half the time — which makes it roughly as fallible as a human being.

To do this, the researchers had to piece together a whole assortment of different software technologies. GeoS uses optical character recognition (OCR) algorithms to read the text, and custom language processing to try to understand what it reads. Geometry questions are structured to be difficult to parse, hiding important information in inferences and implications.
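To get a feel for the language-processing step — turning geometry prose into machine-usable facts after OCR — here is a toy sketch. This is not GeoS's actual parser (its real method handles far richer language and combines text with diagram cues); the patterns and predicate names below are invented for illustration.

```python
import re

def parse_statement(sentence):
    """Extract simple geometric relations from a sentence as
    (predicate, arg1, arg2) facts — a drastically simplified
    stand-in for the parsing a system like GeoS must do."""
    patterns = [
        (r"(\w+) is perpendicular to (\w+)", "Perpendicular"),
        (r"(\w+) is parallel to (\w+)", "Parallel"),
        (r"(\w+) equals (\w+)", "Equals"),
    ]
    facts = []
    for pattern, predicate in patterns:
        for a, b in re.findall(pattern, sentence):
            facts.append((predicate, a, b))
    return facts

facts = parse_statement("Line AB is perpendicular to CD and EF is parallel to GH.")
# facts → [("Perpendicular", "AB", "CD"), ("Parallel", "EF", "GH")]
```

The hard part, which this sketch ignores entirely, is exactly what the article notes: real SAT questions bury relations in inferences rather than stating them in tidy template sentences.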

One intriguing implication of this research is that someday, we might have algorithms quality-checking SAT questions. We could have different AI programs intended to achieve different levels of success on average questions, perhaps even for different reasons. Run proposed new questions through them, and their relative performance could not only weed out bad questions but point to the source of the problem. BadAtReadingAI and BadAtLogicAI did as expected on the question, but BadAtDiagramsAI did terribly — maybe the drawing simply needs to be a little clearer.
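The quality-check idea above can be sketched in a few lines: run a proposed question past several deliberately-limited solvers, compare each one's score to its usual baseline, and flag the skill where performance craters. The solver names, scores, and tolerance below are all invented for illustration — this is a hypothetical workflow, not anything the AI2 researchers built.

```python
def diagnose_question(observed, expected, tolerance=0.15):
    """Return the solvers that underperform their baseline by more
    than `tolerance` — a hint at which skill the question strains."""
    return [name for name, got in observed.items()
            if expected[name] - got > tolerance]

# Baseline accuracy each limited solver normally achieves (invented numbers).
expected = {"reading": 0.40, "logic": 0.45, "diagrams": 0.50}
# Accuracy on the proposed new question.
observed = {"reading": 0.42, "logic": 0.44, "diagrams": 0.10}

flags = diagnose_question(observed, expected)
# flags → ["diagrams"]: maybe the drawing needs to be clearer
```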
This isn't a sign of the coming AI-pocalypse, or at least not a particularly immediate sign; as dense as geometry questions might be, they're homogeneous and nowhere near as complex as something like conversational speech. But this study shows how the individual tools available to AI researchers can be assembled to create rather full-featured artificial intelligences. Where things will really take off is when those same researchers start snapping together those amalgamations into something far more versatile and full-featured — something not entirely unlike a real biological mind.
Source: https://www.extremetech.com/extreme/214896-this-ai-algorithm-can-match-the-average-american-on-real-sat-questions
