
AI Models for Scripture

Open-source machine learning models for biblical scholarship — theological classification, cross-reference detection, language tools, and more.

"The heavens declare the glory of God; and the firmament sheweth his handywork." — Psalm 19:1
9 Live · 0 Planned · 9 Total
Available Now

Live Models

Trained, evaluated, and deployed on HuggingFace. Try them now.

Theological Guardrails

Live

Three-model pipeline that classifies theological statements as orthodox, heterodox, or denominational distinctive, measured against the first six ecumenical councils.

RoBERTa-large + MiniLM-L12 + Flan-T5-base | F1: 0.9971
Classification · Embeddings · Explanation
View Details → | Live Demo
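The classify → retrieve → explain flow of such a pipeline can be sketched as below. Every function body here is a stub standing in for the actual RoBERTa-large classifier, MiniLM-L12 retriever, and Flan-T5-base explainer; the real model interfaces are not documented on this page.

```python
# Minimal sketch of a three-stage theological guardrail pipeline.
# All stubs are illustrative placeholders, not the deployed models.

LABELS = ["orthodox", "heterodox", "denominational distinctive"]

def classify(statement: str) -> str:
    """Stub for the RoBERTa-large classification head."""
    return "orthodox" if "fully God and fully man" in statement else "heterodox"

def retrieve_council_context(statement: str) -> str:
    """Stub for MiniLM-L12 nearest-neighbor lookup over council texts."""
    return "Chalcedon (451): Christ is one person in two natures."

def explain(statement: str, label: str, context: str) -> str:
    """Stub for Flan-T5-base explanation generation."""
    return f"Classified as {label}. Relevant standard: {context}"

def guardrail(statement: str) -> dict:
    label = classify(statement)
    context = retrieve_council_context(statement)
    return {"label": label, "explanation": explain(statement, label, context)}

result = guardrail("Christ is fully God and fully man.")
print(result["label"])  # orthodox
```

The design point is the division of labor: a discriminative classifier for the label, an embedding retriever for grounding, and a seq2seq model to verbalize the reasoning.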

Intertextual Reference Network

Live

Two-model pipeline for discovering biblical cross-references and classifying connection types (direct quote, allusion, typology, prophecy fulfillment, and more) across 31,102 KJV verses.

MiniLM-L12 + RoBERTa-base | Ranking: 86.75% | Macro F1: 0.761
Cross-References · Typology · Embeddings · Classification
View Details → | Live Demo
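Embedding-based cross-reference discovery reduces to ranking candidate verses by vector similarity to a query verse. A toy illustration, using bag-of-words counts in place of MiniLM-L12 dense embeddings (the verse snippets are lowercased KJV fragments):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; the real model uses dense MiniLM vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = "the lord is my shepherd"  # Psalm 23:1
candidates = {
    "John 10:11": "i am the good shepherd",
    "Psalm 100:3": "we are his people and the sheep of his pasture",
    "Genesis 1:1": "in the beginning god created the heaven and the earth",
}
ranked = sorted(candidates,
                key=lambda ref: cosine(embed(query), embed(candidates[ref])),
                reverse=True)
print(ranked[0])  # John 10:11
```

In the two-stage pipeline, the top-ranked pairs from this similarity step would then be passed to the RoBERTa classifier to label the connection type (quote, allusion, typology, and so on).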

Biblical Language Tutor

Live

Two-model seq2seq pipeline for morphological parsing and interlinear glossing of biblical Hebrew and Greek. Parses word-level grammar and generates word-by-word English translations from Macula data.

mT5-small (Parser + Glosser) | Tag F1: 0.886 | BLEU: 22.06
Hebrew · Greek · Morphology · Interlinear
View Details → | Live Demo
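A morphological parser in a seq2seq setup emits a tag string per word, which downstream code then splits into grammatical features. The pipe-delimited format below is an assumption for illustration; the model's actual serialization is not documented on this page.

```python
def parse_tag(tag: str) -> dict:
    """Split a pipe-delimited morphology tag into part of speech + features.
    The 'POS|Key=Value|...' format is an assumed example, not the model's
    documented output."""
    pos, _, feats = tag.partition("|")
    features = dict(f.split("=", 1) for f in feats.split("|")) if feats else {}
    return {"pos": pos, **features}

# e.g. a hypothetical tag for a Greek aorist verb
parsed = parse_tag("Verb|Tense=Aorist|Voice=Active|Person=3|Number=Sing")
print(parsed["Tense"])  # Aorist
```

The glosser half of the pipeline would pair each parsed word with its English gloss to build the interlinear line.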

Manuscript Variant Analyzer

Live

Seq2seq classifier for textual criticism — analyzes Greek NT manuscript variants across 6 critical editions (NA28, SBL, RP, ST, WH, Treg) to identify omissions, substitutions, and spelling differences.

mT5-small (Seq2Seq Classifier) | Type Accuracy: 98.87% (v2)
Textual Criticism · Manuscripts · Greek NT
View Details → | Live Demo
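The variant categories can be illustrated with simple rules over two aligned readings: words dropped in order suggest an omission, a single word within small edit distance suggests a spelling variant, anything else a substitution. This heuristic only mirrors the label set; the deployed classifier is a fine-tuned mT5, not rules.

```python
def edit_distance(x: str, y: str) -> int:
    """Character-level Levenshtein distance (standard DP)."""
    prev = list(range(len(y) + 1))
    for i, cx in enumerate(x, 1):
        cur = [i]
        for j, cy in enumerate(y, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cx != cy)))
        prev = cur
    return prev[-1]

def variant_type(reading_a: str, reading_b: str) -> str:
    """Toy rule-based stand-in for the model's variant categories."""
    a, b = reading_a.split(), reading_b.split()
    if len(a) != len(b):
        shorter, longer = sorted((a, b), key=len)
        it = iter(longer)
        # shorter reading appearing in order inside the longer one => omission
        return "omission" if all(w in it for w in shorter) else "substitution"
    diffs = [(x, y) for x, y in zip(a, b) if x != y]
    if not diffs:
        return "identical"
    if len(diffs) == 1 and edit_distance(*diffs[0]) <= 2:
        return "spelling"
    return "substitution"

print(variant_type("εν εφεσω τοις αγιοις", "τοις αγιοις"))  # omission
print(variant_type("δαυιδ", "δαβιδ"))                        # spelling
```

A seq2seq formulation instead feeds both readings (plus context) to the model and decodes the category label as text, which handles cases these rules cannot.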

Cross-Translation Semantic

Live

Sentence embedding model for comparing how 5 English Bible translations (KJV, ASV, YLT, Darby, AKJV) render the same verse. Detects semantic similarity and translation divergences across 31,103 verses.

paraphrase-multilingual-MiniLM-L12-v2 | Pearson: 0.986 | Spearman: 0.492
Translation · Embeddings · Comparison
View Details → | Live Demo
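The two reported correlations measure different things, which is why they can diverge so sharply: Pearson tracks linear agreement of the raw similarity scores, while Spearman only tracks their rank order. Both can be computed from paired score lists (a minimal version, with tie handling omitted; the data below is toy):

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman = Pearson over ranks (no tie correction here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

pred = [0.10, 0.20, 0.90, 0.95]   # toy model similarity scores
gold = [0.15, 0.10, 0.92, 0.90]   # toy reference judgments
print(round(pearson(pred, gold), 3), round(spearman(pred, gold), 3))
```

Here the scores cluster into clear low/high groups (high Pearson) while the ordering inside each cluster is swapped (lower Spearman), the same pattern the model's 0.986 / 0.492 split suggests.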

Biblical Entity Recognizer

Live

Named entity recognition model for identifying people, places, groups, divine beings, and artifacts in biblical text. Trained on KJV with entity spans and types.

mT5-small (NER Seq2Seq) | Entity F1: 0.9786
NER · People · Places · Entities
View Details → | Live Demo
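A seq2seq NER model generates entities as text rather than token labels, so a small parser turns the generated string back into structured spans. The `name [TYPE]; ...` serialization below is an assumed format for illustration, not the model's documented output.

```python
import re

def parse_entities(generated: str):
    """Parse 'Entity [TYPE]' pairs from a seq2seq NER output string.
    The semicolon-separated format is an assumption for this sketch."""
    return [
        {"text": m.group(1).strip(), "type": m.group(2)}
        for m in re.finditer(r"([^;\[]+)\[([A-Z_]+)\]", generated)
    ]

ents = parse_entities("Moses [PERSON]; Egypt [PLACE]; the LORD [DIVINE]")
print([e["text"] for e in ents])  # ['Moses', 'Egypt', 'the LORD']
```

Parsing back to spans is also what makes the reported entity F1 computable: predicted (text, type) pairs are matched against gold pairs.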

Topical Passage Classifier

Live

Semantic search model for finding biblical passages by topic. Fine-tuned on Nave's Topical Bible and TSK cross-references to retrieve relevant verses for any theological query across all 66 books.

MiniLM-L12-v2 (Semantic Search) | NDCG@10: 0.1831
Topics · Theology · Semantic Search
View Details → | Live Demo
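NDCG@10, the metric reported above, scores a ranking by how much relevance mass lands near the top of the first ten results, normalized by the best possible ordering. A self-contained implementation:

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k: relevances[i] is the graded relevance of the result the
    system ranked at position i (0 = irrelevant)."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg else 0.0

# relevant passage ranked second instead of first costs log-discounted credit
print(round(ndcg_at_k([0, 3, 0, 1]), 3))
```

A score of 0.1831 thus means relevant passages tend to sit well below the top of the ranking on this benchmark, which contextualizes the number better than the bare figure.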

Passage Difficulty Simplifier

Live

Rates biblical passage reading difficulty and generates simplified paraphrases for complex verses. Helps new readers and children access challenging Scripture passages.

Flan-T5-base (Difficulty + Simplification) | Difficulty Accuracy: retraining...
Simplification · Accessibility · Readability
View Details → | Live Demo
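For intuition on what "reading difficulty" means here, a classic surface-level baseline is the Flesch reading-ease formula (longer sentences and more syllables per word lower the score). This is only a baseline for comparison; the deployed model is a fine-tuned Flan-T5, not this formula.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Syllables are approximated by vowel groups, a rough heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

simple = "Jesus wept."                                   # John 11:35 (KJV)
complex_ = ("Blessed be the God and Father of our Lord Jesus Christ, "
            "who hath blessed us with all spiritual blessings in "
            "heavenly places in Christ.")                # Ephesians 1:3 (KJV)
print(flesch_reading_ease(simple) > flesch_reading_ease(complex_))  # True
```

Higher scores mean easier text, so the short verse scores well above the long Pauline sentence, matching the intuition the model is trained to capture more robustly.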

Evangelism & Apologetics

Live

Three-model RAG pipeline for answering apologetics questions with Scripture. An intent classifier routes each question into one of 5 categories, a retriever fetches from 16K+ passages (Spurgeon, GotQuestions, church fathers, creation science, miracles), and a Qwen3-14B LoRA generator (perplexity 4.08) produces grounded responses.

RoBERTa-base + MiniLM-L12 + Qwen3-14B LoRA | Intent F1: 0.83 | Retriever Pearson: 0.90 | Generator PPL: 4.08
Apologetics · Evangelism · RAG · Generation
View Details → | Live Demo
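The route → retrieve → generate flow can be sketched with toy stand-ins: keyword matching in place of the RoBERTa intent classifier, a two-document corpus in place of the 16K-passage index, and string formatting in place of the Qwen3-14B generator. Categories and keywords below are invented for illustration.

```python
import re

# Toy stand-ins; the live pipeline uses RoBERTa-base (intent),
# MiniLM-L12 (retrieval), and a Qwen3-14B LoRA (generation).
INTENT_KEYWORDS = {
    "creation": {"creation", "evolution", "genesis"},
    "reliability": {"manuscript", "contradiction", "reliable"},
}

CORPUS = {
    "creation": ["In the beginning God created the heaven and the earth."],
    "reliability": ["Thy word is true from the beginning."],
}

def route(question: str) -> str:
    """Keyword routing in place of the intent classifier."""
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    for intent, keys in INTENT_KEYWORDS.items():
        if tokens & keys:
            return intent
    return "general"

def retrieve(intent: str) -> list:
    """Category-filtered lookup in place of embedding retrieval."""
    return CORPUS.get(intent, [])

def answer(question: str) -> str:
    intent = route(question)
    passages = retrieve(intent)
    grounding = passages[0] if passages else "no passage retrieved"
    return f"[{intent}] {grounding}"

print(answer("How do I respond to evolution?"))
```

The point of the intent stage is to narrow the retrieval pool before the generator runs, so the LoRA model is always conditioned on passages from the right category.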
Coming Soon

Planned Models

Currently in research and development. Each model follows the same pipeline: research paper, dataset creation, training, evaluation, and HuggingFace deployment.

How Our Models Are Built

Every model follows a rigorous pipeline from research to deployment.

1

Research

Academic literature review and approach design

2

Dataset

Curated seeds expanded with augmentation and validation

3

Training

Fine-tuning on NVIDIA A100 80GB with bf16 precision

4

Deploy

Published to HuggingFace with interactive Gradio demos
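The training step above (fine-tuning with bf16 on an A100) typically reduces to a handful of trainer flags. A hedged config sketch using the Hugging Face `transformers` `TrainingArguments` API; every hyperparameter value here is illustrative, not the project's published setting:

```python
from transformers import TrainingArguments

# Illustrative values only; the actual hyperparameters are not
# documented on this page.
args = TrainingArguments(
    output_dir="out/model",
    bf16=True,                       # bfloat16 mixed precision (A100-class GPUs)
    per_device_train_batch_size=32,  # assumed batch size
    learning_rate=2e-5,              # assumed learning rate
    num_train_epochs=3,              # assumed epoch count
)
```

Passing this object to a `Trainer` alongside the model and dataset covers the fine-tuning step; evaluation and HuggingFace publishing then follow as steps 3 and 4 describe.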

All Models on HuggingFace

Every model is MIT licensed and free to use. Download weights, run inference, or integrate into your own applications.

View on HuggingFace