---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: transformers.js
license: apache-2.0
---

https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```

You can then use the model to compute embeddings like this:

```js
import { pipeline } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

// Compute sentence embeddings
const sentences = ['This is an example sentence', 'Each sentence is converted'];
const output = await extractor(sentences, { pooling: 'mean', normalize: true });
console.log(output);
// Tensor {
//   dims: [ 2, 384 ],
//   type: 'float32',
//   data: Float32Array(768) [ 0.04592696577310562, 0.07328180968761444, ... ],
//   size: 768
// }
```

You can convert this Tensor to a nested JavaScript array using `.tolist()`:
```js
console.log(output.tolist());
// [
//   [ 0.04592696577310562, 0.07328180968761444, 0.05400655046105385, ... ],
//   [ 0.08188057690858841, 0.10760223120450974, -0.013241755776107311, ... ]
// ]
```
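
For example, since `normalize: true` was set above, each embedding has unit length, so the cosine similarity between the two sentences reduces to a plain dot product over the arrays returned by `.tolist()` (an illustrative sketch, not part of the pipeline API):

```js
// Unit-length embeddings: cosine similarity equals the dot product
const [a, b] = output.tolist();
const similarity = a.reduce((sum, value, i) => sum + value * b[i], 0);
console.log(similarity); // a single score between -1 and 1
```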

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
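
For reference, an export along those lines might look roughly like this (a minimal sketch using the `optimum-cli` ONNX exporter; the package extra and output folder name are illustrative):

```bash
# Install 🤗 Optimum with ONNX export support
pip install "optimum[exporters]"

# Export the PyTorch checkpoint to ONNX (the output folder name is arbitrary)
optimum-cli export onnx --model sentence-transformers/all-MiniLM-L6-v2 all-MiniLM-L6-v2-onnx/
```

The exported ONNX file(s) would then go in an `onnx/` subfolder of the model repo, mirroring the layout of this one.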