# Processed Dataset with Multi-Vector BGE-M3 Scores
This dataset was automatically built using a custom pipeline to format data for Sentence Transformers training.
## Source Dataset

- Original Dataset: sentence-transformers/natural-questions
- Split processed: train
- Anchor Column: query
- Positive Column: answer
- Negative Column: negative
## Hard Negative Mining

Hard negatives were aggressively mined from the dataset to improve model training robustness.

- Bi-Encoder (Mining Model): Snowflake/snowflake-arctic-embed-l-v2.0
- Cross-Encoder (Reranker): None used
- Negatives per positive: 1
- Relative margin applied: 0.06
- Absolute margin applied: 0.04
- Range min/max: 0 - 100
- Negative score min/max: 0.1 - 0.9
- Sampling strategy: top
- FAISS used: True
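The margin and range parameters above can be sketched as a simple filter. This is a hypothetical illustration of how such settings are typically applied during mining, not the card's actual pipeline code; the function and parameter names are assumptions.

```python
def keep_negative(pos_score: float, neg_score: float,
                  relative_margin: float = 0.06,
                  absolute_margin: float = 0.04,
                  score_min: float = 0.1,
                  score_max: float = 0.9) -> bool:
    """Return True if a mined candidate negative passes the filters.

    Illustrative only: a candidate is kept when its bi-encoder similarity
    to the query lies in the allowed range and trails the positive's score
    by both the relative and the absolute margin.
    """
    # Negative score range filter (0.1 - 0.9 above).
    if not (score_min <= neg_score <= score_max):
        return False
    # Relative margin: negative must score below pos_score * (1 - margin).
    if neg_score > pos_score * (1 - relative_margin):
        return False
    # Absolute margin: negative must trail the positive by at least the margin.
    if neg_score > pos_score - absolute_margin:
        return False
    return True

print(keep_negative(0.8, 0.70))  # True: clears both margins and the range
print(keep_negative(0.8, 0.79))  # False: too close to the positive's score
```

With a "top" sampling strategy, the highest-scoring candidates that still pass such a filter would be selected as hard negatives.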
## Detailed Multi-Vector Scoring (BAAI/bge-m3)

The dataset includes granular similarity scores computed using BAAI/bge-m3. These scores cover Dense, Sparse (Lexical), and ColBERT-style (Multi-Vector) representations, plus an aggregate score natively weighted as 0.4 * dense + 0.2 * sparse + 0.4 * colbert.

- Scoring Model: BAAI/bge-m3
- Max Passage Length: 4000
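The aggregate weighting stated above can be written out directly. A minimal sketch (the function name is illustrative, not part of the BGE-M3 API):

```python
def aggregate_m3_score(dense: float, sparse: float, colbert: float) -> float:
    """Combine the three BGE-M3 similarity signals with the card's weights:
    0.4 * dense + 0.2 * sparse + 0.4 * colbert."""
    return 0.4 * dense + 0.2 * sparse + 0.4 * colbert

# Example: a pair scoring 0.9 dense, 0.5 sparse, 0.8 colbert
# aggregates to roughly 0.78.
print(round(aggregate_m3_score(0.9, 0.5, 0.8), 2))
```

Note the sparse (lexical) signal carries half the weight of either the dense or the ColBERT signal.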
## New Features Added

- anchor_pos_aggregate_sim_m3
- anchor_pos_list_sim_m3
- anchor_neg_aggregate_sim_m3
- anchor_neg_list_sim_m3
- scores: Bi-Encoder rescoring predictions from the hard-negative mining phase.
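One way these columns might be consumed is to keep only rows where the positive clearly outscores the negative. A hedged sketch using plain dicts in place of real rows (the actual dataset would be loaded with the Hugging Face datasets library, omitted here to keep the example offline; the gap threshold is an assumption):

```python
# Illustrative rows mimicking the aggregate similarity columns listed above.
rows = [
    {"query": "q1", "anchor_pos_aggregate_sim_m3": 0.82,
     "anchor_neg_aggregate_sim_m3": 0.41},
    {"query": "q2", "anchor_pos_aggregate_sim_m3": 0.55,
     "anchor_neg_aggregate_sim_m3": 0.52},
]

def well_separated(row: dict, min_gap: float = 0.1) -> bool:
    """Keep rows whose positive aggregate score beats the negative's
    by at least min_gap (threshold chosen for illustration)."""
    gap = (row["anchor_pos_aggregate_sim_m3"]
           - row["anchor_neg_aggregate_sim_m3"])
    return gap >= min_gap

filtered = [r for r in rows if well_separated(r)]
print([r["query"] for r in filtered])  # ['q1']
```

Such a filter could be used to drop ambiguous training pairs before fine-tuning with Sentence Transformers.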