Subjective Answer Evaluation using Machine Learning

Abstract

  • This project proposes a novel approach that uses machine learning and natural language processing techniques to evaluate descriptive answers automatically.
  • Solution statements and keywords are used to evaluate answers, and a machine learning model is trained to predict answer grades.
  • With enough training, the machine learning model could also be used as a standalone evaluator.
  • Experiments produce an accuracy of 97% with the proposed model.
  • Artificial intelligence has proven an efficient tool for prediction problems of this kind.
  • The proposed work applies deep learning techniques along with several preprocessing steps to improve the prediction of answer grades.
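The preprocessing steps mentioned above are not spelled out here; a minimal sketch of typical text preprocessing for this task (lowercasing, punctuation stripping, tokenization, stop-word removal) might look like the following. The stop-word list is an illustrative assumption; a real system would use a fuller list such as NLTK's.

```python
import re
import string

# Illustrative stop-word list (an assumption, not the project's actual list).
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it"}

def preprocess(text):
    """Lowercase, strip punctuation, tokenize, and drop stop words."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = re.findall(r"[a-z0-9]+", text)
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The CPU is the brain of the computer."))
# → ['cpu', 'brain', 'computer']
```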

EXISTING SYSTEM

  • Much work has been done on subjective answer evaluation in one form or another, such as measuring similarity between texts, words, and even documents.
  • Other approaches include finding the context behind a text and mapping it to the solution's context, counting the noun phrases in the documents, and matching keywords in the answers.
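Keyword matching of the kind described above is often scored with a set-overlap measure; a simple sketch using Jaccard similarity (an illustrative choice, not necessarily what the surveyed systems use) is:

```python
def jaccard_similarity(a_tokens, b_tokens):
    """Jaccard overlap between two token sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a_tokens), set(b_tokens)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

student = "photosynthesis converts light energy into chemical energy".split()
solution = "photosynthesis converts light into chemical energy in plants".split()
print(jaccard_similarity(student, solution))  # → 0.75
```

One known weakness of this measure, noted in the disadvantages below, is that synonyms (e.g. "transforms" for "converts") score zero overlap.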

DISADVANTAGES

  • Existing studies tend to make synonym errors, missing words that are synonyms of the expected keywords.
  • Answers can have an extensive range of possible lengths, which existing studies handle poorly.
  • The sentences within an answer can appear in any order, which existing studies do not account for.

PROPOSED SYSTEM

  • This project proposes a new and improved way of evaluating descriptive question answers automatically using machine learning and natural language processing.
  • It uses a two-step approach to solve this problem.
  • First, the answers are evaluated against the solution and the provided keywords using various similarity-based techniques, such as word mover's distance.
  • Second, a machine learning model is trained on these evaluations to predict answer grades.
  • This form of machine evaluation is a big step forward in aiding the educational sector: it frees educators to perform their other duties efficiently and reduces the manual labor in trivial tasks such as comparing answers with a correct solution.
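The two-step pipeline above can be sketched as follows. True word mover's distance needs pretrained word embeddings (e.g. via gensim's `KeyedVectors.wmdistance`); here a term-frequency cosine similarity stands in for step one, and the 0.7/0.3 weighting in step two is an illustrative assumption, not the paper's formula.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a_tokens, b_tokens):
    """Step 1 stand-in: cosine similarity between term-frequency vectors
    (substituting for embedding-based measures like word mover's distance)."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(similarity, keyword_hits, keyword_total):
    """Step 2 stand-in: combine similarity and keyword coverage into a 0-10 grade.
    The weights are assumed for illustration only."""
    coverage = keyword_hits / keyword_total if keyword_total else 0.0
    return round(10 * (0.7 * similarity + 0.3 * coverage), 1)

solution = "the heart pumps blood through the circulatory system".split()
answer = "the heart pumps blood around the body".split()
keywords = {"heart", "pumps", "blood"}

sim = cosine_similarity(answer, solution)
hits = len(keywords & set(answer))
print(grade(sim, hits, len(keywords)))
```

In the full system, step two would be replaced by the trained machine learning model predicting the grade from these features.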

PROJECT VIDEO

 

Software Requirements:

  • Front End – Anaconda IDE
  • Backend – SQL
  • Language – Python 3.8

Hardware Requirements:

  • Hard Disk: Greater than 500 GB
  • RAM: Greater than 4 GB
  • Processor: Intel i3 or above

Including Packages

* Base Paper

* Complete Source Code

* Complete Documentation

* Complete Presentation Slides

* Flow Diagram

* Database File

* Screenshots

* Execution Procedure

* Readme File

* Addons

* Video Tutorials

 
