InQuest Document ML models excel at detecting malicious Office documents, spreadsheets, and presentations, whether they contain VBA macros or not, and across both OLE and OOXML formats. Our rigorous evaluation of these models shows outstanding results.
The F1 score, the harmonic mean of precision and recall, is an impressive 0.9989. This metric captures how well our models balance catching true positives against avoiding both false positives and false negatives. Our accuracy, which measures the overall correctness of the models' predictions, is 0.9986, signifying that nearly all predictions are correct.
In terms of precision, which measures the percentage of correctly identified malicious samples out of all samples flagged as malicious, our models achieved a remarkable 0.9992. In other words, when our models flag a document as malicious, they are almost always right. The recall, indicating how many actual malicious samples were correctly identified, stands at 0.9985, demonstrating the models' effectiveness in catching nearly every malicious document.
To contextualize these metrics, our models correctly identified 32,894 benign documents and 51,782 malicious ones, missing only 76 malicious documents (false negatives) and incorrectly flagging just 39 benign documents as malicious (false positives). These results highlight the robustness and reliability of our models in defending against threats posed by malicious documents.
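For readers who want to sanity-check these figures, the metrics above can be reproduced directly from the confusion-matrix counts. The short sketch below (plain Python, no external libraries) plugs in the true/false positive and negative totals from our evaluation and computes precision, recall, accuracy, and F1 from their standard definitions:

```python
# Confusion-matrix counts from the evaluation described above.
tp = 51_782  # malicious documents correctly flagged (true positives)
tn = 32_894  # benign documents correctly passed (true negatives)
fp = 39      # benign documents incorrectly flagged (false positives)
fn = 76      # malicious documents missed (false negatives)

# Standard metric definitions.
precision = tp / (tp + fp)                    # of everything flagged, how much was truly malicious
recall    = tp / (tp + fn)                    # of all malicious samples, how many were caught
accuracy  = (tp + tn) / (tp + tn + fp + fn)   # overall fraction of correct predictions
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"precision = {precision:.4f}")  # 0.9992
print(f"recall    = {recall:.4f}")     # 0.9985
print(f"accuracy  = {accuracy:.4f}")   # 0.9986
print(f"F1 score  = {f1:.4f}")         # 0.9989
```

Running this reproduces the rounded values reported in the preceding paragraphs.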
In summary, the InQuest Document ML models deliver exceptional accuracy, precision, and recall in detecting malicious Office files, ensuring your organization's data remains secure without sacrificing performance.