
Processed Dataset with Multi-Vector BGE-M3 Scores

This dataset was automatically built using a custom pipeline to format data for Sentence Transformers training.

Source Dataset

  • Original Dataset: sentence-transformers/natural-questions
  • Split processed: train
  • Anchor Column: query
  • Positive Column: answer
  • Negative Column: negative

Hard Negative Mining

Hard negatives were mined from the source dataset to make model training more robust.

  • Bi-Encoder (Mining Model): Snowflake/snowflake-arctic-embed-l-v2.0
  • Cross-Encoder (Reranker): none used
  • Negatives per positive: 1
  • Relative margin: 0.06
  • Absolute margin: 0.04
  • Range (min/max): 0 - 100
  • Negative score (min/max): 0.1 - 0.9
  • Sampling strategy: top
  • FAISS used: True
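The margin and score bounds above can be read as a candidate filter. The sketch below illustrates that filtering logic in plain Python; it approximates the settings listed here, and the exact semantics inside Sentence Transformers' hard-negative mining utility may differ.

```python
def keep_negative(pos_score, neg_score,
                  absolute_margin=0.04, relative_margin=0.06,
                  min_score=0.1, max_score=0.9):
    """Return True if a mined candidate qualifies as a hard negative.

    Approximates the filters listed above (values from this card);
    not the library's actual implementation.
    """
    if not (min_score <= neg_score <= max_score):
        return False  # outside the negative score min/max bounds (0.1 - 0.9)
    if neg_score > pos_score - absolute_margin:
        return False  # too close to the positive in absolute terms (0.04)
    if neg_score > pos_score * (1 - relative_margin):
        return False  # too close to the positive in relative terms (0.06)
    return True

# A candidate at 0.70 against a positive at 0.85 clears both margins:
print(keep_negative(0.85, 0.70))  # True
# A candidate at 0.83 fails the absolute margin (0.83 > 0.85 - 0.04):
print(keep_negative(0.85, 0.83))  # False
```

With "Sampling strategy: top", the highest-scoring candidates that survive these filters are kept, yielding the hardest usable negatives.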

Detailed Multi-Vector Scoring (BAAI/bge-m3)

The dataset includes granular similarity scores computed with BAAI/bge-m3, covering its dense, sparse (lexical), and ColBERT-style (multi-vector) representations, plus an aggregate score computed with the model's native weighting of 0.4 * dense + 0.2 * sparse + 0.4 * colbert.

  • Scoring Model: BAAI/bge-m3
  • Max Passage Length: 4000
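The aggregate score can be recomputed from the three per-signal similarities. A minimal sketch of that weighted sum, using the 0.4/0.2/0.4 weighting stated above (the function name is illustrative, not from the pipeline):

```python
def aggregate_m3_score(dense, sparse, colbert, weights=(0.4, 0.2, 0.4)):
    """Combine BGE-M3's three similarity signals into one score
    using the 0.4 * dense + 0.2 * sparse + 0.4 * colbert weighting."""
    w_dense, w_sparse, w_colbert = weights
    return w_dense * dense + w_sparse * sparse + w_colbert * colbert

print(aggregate_m3_score(0.9, 0.5, 0.8))  # ≈ 0.78
```

Note that the dense and ColBERT signals dominate the aggregate; the sparse (lexical) signal contributes half as much weight.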

New Features Added:

  • anchor_pos_aggregate_sim_m3, anchor_pos_list_sim_m3
  • anchor_neg_aggregate_sim_m3, anchor_neg_list_sim_m3
  • scores: Bi-Encoder rescoring predictions from the hard-negative mining phase
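To show how these columns fit together, here is a hypothetical example row. All values and text are illustrative only (not taken from the real dataset), and the ordering of the per-signal list entries is an assumption:

```python
# Hypothetical processed row; every value below is made up for illustration.
row = {
    "query": "when was the eiffel tower built",      # anchor (illustrative)
    "answer": "The Eiffel Tower was built ...",       # positive passage (illustrative)
    "negative": "The Eiffel Tower is located ...",    # mined hard negative (illustrative)
    "anchor_pos_aggregate_sim_m3": 0.81,              # 0.4*dense + 0.2*sparse + 0.4*colbert
    "anchor_pos_list_sim_m3": [0.85, 0.55, 0.88],     # assumed order: [dense, sparse, colbert]
    "anchor_neg_aggregate_sim_m3": 0.52,
    "anchor_neg_list_sim_m3": [0.55, 0.30, 0.60],
    "scores": [0.74, 0.41],                           # bi-encoder rescoring from mining
}

# Sanity check: the positive should outscore the mined negative.
assert row["anchor_pos_aggregate_sim_m3"] > row["anchor_neg_aggregate_sim_m3"]
```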
